Message-ID: <ZyFJwtPzmKsx5EdI@google.com>
Date: Tue, 29 Oct 2024 20:46:58 +0000
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Tejun Heo <tj@...nel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Luca Boccassi <bluca@...ian.org>,
kvm@...r.kernel.org, cgroups@...r.kernel.org,
Michal Koutný <mkoutny@...e.com>,
linux-kernel@...r.kernel.org
Subject: Re: cgroup2 freezer and kvm_vm_worker_thread()
On Mon, Oct 28, 2024 at 02:07:36PM -1000, Tejun Heo wrote:
> Hello,
>
> Luca is reporting that cgroups which have kvm instances inside never
> complete freezing. This can be trivially reproduced:
>
> root@...t ~# mkdir /sys/fs/cgroup/test
> root@...t ~# echo $fish_pid > /sys/fs/cgroup/test/cgroup.procs
> root@...t ~# qemu-system-x86_64 --nographic -enable-kvm
>
> and in another terminal:
>
> root@...t ~# echo 1 > /sys/fs/cgroup/test/cgroup.freeze
> root@...t ~# cat /sys/fs/cgroup/test/cgroup.events
> populated 1
> frozen 0
> root@...t ~# for i in (cat /sys/fs/cgroup/test/cgroup.threads); echo $i; cat /proc/$i/stack; end
> 2070
> [<0>] do_freezer_trap+0x42/0x70
> [<0>] get_signal+0x4da/0x870
> [<0>] arch_do_signal_or_restart+0x1a/0x1c0
> [<0>] syscall_exit_to_user_mode+0x73/0x120
> [<0>] do_syscall_64+0x87/0x140
> [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> 2159
> [<0>] do_freezer_trap+0x42/0x70
> [<0>] get_signal+0x4da/0x870
> [<0>] arch_do_signal_or_restart+0x1a/0x1c0
> [<0>] syscall_exit_to_user_mode+0x73/0x120
> [<0>] do_syscall_64+0x87/0x140
> [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> 2160
> [<0>] do_freezer_trap+0x42/0x70
> [<0>] get_signal+0x4da/0x870
> [<0>] arch_do_signal_or_restart+0x1a/0x1c0
> [<0>] syscall_exit_to_user_mode+0x73/0x120
> [<0>] do_syscall_64+0x87/0x140
> [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> 2161
> [<0>] kvm_nx_huge_page_recovery_worker+0xea/0x680
> [<0>] kvm_vm_worker_thread+0x8f/0x2b0
> [<0>] kthread+0xe8/0x110
> [<0>] ret_from_fork+0x33/0x40
> [<0>] ret_from_fork_asm+0x1a/0x30
> 2164
> [<0>] do_freezer_trap+0x42/0x70
> [<0>] get_signal+0x4da/0x870
> [<0>] arch_do_signal_or_restart+0x1a/0x1c0
> [<0>] syscall_exit_to_user_mode+0x73/0x120
> [<0>] do_syscall_64+0x87/0x140
> [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> Cgroup freezing happens in the signal delivery path, but the
> kvm_vm_worker_thread() threads never call into the signal delivery path even
> though they join non-root cgroups, so they never get frozen. Because the cgroup
> freezer determines whether a given cgroup is frozen by comparing the number
> of frozen threads to the total number of threads in the cgroup, the cgroup
> never becomes frozen and users waiting for the state transition may hang
> indefinitely.
>
> There are two paths that we can take:
>
> 1. Make kvm_vm_worker_thread() call into signal delivery path.
>    io_wq_worker() is in a similar boat; it handles signal delivery and can
>    be frozen and trapped like regular threads.
>
> 2. Keep the count of threads which can't be frozen per cgroup so that cgroup
> freezer can ignore these threads.
>
> #1 is better in that the cgroup will actually be frozen when reported
> frozen. However, the rather ambiguous criterion we've been using for the
> cgroup freezer is whether the cgroup can be safely snapshotted while frozen,
> and as long as the workers not being frozen doesn't break that, we can go
> for #2 too.
>
> What do you guys think?
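
For reference, the reason a single worker thread keeps the cgroup stuck at
"frozen 0": if I'm reading kernel/cgroup/freezer.c correctly,
cgroup_update_frozen() only flips the state once every task in the cgroup has
been trapped. Roughly (illustrative paraphrase, not the actual function):

	/*
	 * Illustrative paraphrase of the check cgroup_update_frozen() does:
	 * the cgroup is reported frozen only when freezing was requested and
	 * every task in the cgroup has hit the freezer trap.  One kthread
	 * that never enters signal delivery keeps nr_frozen_tasks below the
	 * task count forever.
	 */
	static bool cgroup_reports_frozen(struct cgroup *cgrp)
	{
		return test_bit(CGRP_FREEZE, &cgrp->flags) &&
		       cgrp->freezer.nr_frozen_tasks == __cgroup_task_count(cgrp);
	}
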
The general assumption (which is broken here) is that kernel threads belong to
the root cgroup and are not safe to freeze, so we're fine unless a user moves
them into another cgroup, which is generally not a good idea for many reasons.
However, in this case we have a kthread which we want to freeze and which
belongs to a non-root cgroup, so I think option #1 is preferable (see the
sketch below). Option #2 brings a notion of special non-freezable threads into
a user-facing API, and I don't think we really need that, so I'd avoid it.
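
Completely untested sketch of what option #1 could look like, modeled on
io_wq_worker(). The helper name is made up, and a bare kthread would also need
to be set up to actually participate in signal delivery for the freezer trap
to ever reach it:

	/*
	 * Made-up helper, only to illustrate option #1: let the worker enter
	 * the signal delivery path so the cgroup freezer can trap it in
	 * do_freezer_trap(), like io_wq_worker() does.
	 */
	static bool kvm_vm_worker_handle_signals(void)
	{
		struct ksignal ksig;

		if (!signal_pending(current))
			return true;		/* nothing pending, keep running */

		/* get_signal() is where the freezer trap (and exit) happens. */
		if (get_signal(&ksig))
			return false;		/* fatal signal, stop the worker */

		return true;
	}

	/* ... and in the kvm_vm_worker_thread() main loop: */
	while (!kthread_should_stop()) {
		if (!kvm_vm_worker_handle_signals())
			break;
		/* do the actual work, e.g. NX huge page recovery */
	}
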
Thanks!