Message-ID: <df449a6e-bbe1-4dba-9b05-c2ce746e9163@virtuozzo.com>
Date: Wed, 24 Dec 2025 11:06:47 +0800
From: Pavel Tikhomirov <ptikhomirov@...tuozzo.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] cgroup-v2/freezer: small improvements
On 12/24/25 01:29, Michal Koutný wrote:
> On Tue, Dec 23, 2025 at 06:20:06PM +0800, Pavel Tikhomirov <ptikhomirov@...tuozzo.com> wrote:
>> First allows freezing cgroups with kthreads inside, we still won't
>> freeze kthreads, we still ignore them, but at the same time we allow
>> cgroup to report frozen when all other non-kthread tasks are frozen.
>
> kthreads in non-root cgroups are kind of an antipattern.
> For which kthreads you would like this change? (See for instance the
> commit d96c77bd4eeba ("KVM: x86: switch hugepage recovery thread to
> vhost_task") as a possible refactoring of such threads.)
To explain our use case, I need to dive a bit into how Virtuozzo containers (OpenVZ) work on our custom Virtuozzo kernel.
In our case we have two custom kernel threads for each container: "kthreadd" and "umh".
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/kernel/ve/ve.c#lines-481
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/kernel/ve/ve.c#lines-581
The "kthreadd" thread allows creating per-container kthreads; through it we create the kthreads for "umh" (explained below) and the sunrpc svc kthreads inside the container.
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/net/sunrpc/svc.c#lines-815
The "umh" thread lets the kernel run userspace commands inside the container; e.g. we use it to virtualize (run in the container) coredump collection, the nfs upcall, and the cgroup-v1 release-agent.
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/fs/coredump.c#lines-640
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/fs/nfsd/nfs4recover.c#lines-1849
https://bitbucket.org/openvz/vzkernel/src/662c0172a9d4aecf52dbaea4f903ccc801b569b2/kernel/cgroup/cgroup-v1.c#lines-930
And we really want those threads to be restricted by the same cgroups as the container.
The commit you mentioned is an interesting one; we can try to switch our custom kthreads to "vhost_task", similar to what KVM did. It's not obvious whether it will fly until we try =)
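For reference, a minimal sketch of what such a conversion might look like on our side, assuming the post-d96c77bd4eeba vhost_task API (vhost_task_create() taking a work callback, a SIGKILL handler, a data pointer, and a name). The ve_umh_* names and the request queue are hypothetical, purely illustrative:

```c
#include <linux/err.h>
#include <linux/sched/vhost_task.h>

/* Hypothetical per-container worker state; names are illustrative only. */
struct ve_umh_data {
	struct vhost_task *task;
	/* ... queue of userspace-helper requests ... */
};

/*
 * Called in a loop by the vhost_task machinery: return true to be
 * called again immediately (more work pending), false to let the
 * task sleep until vhost_task_wake().
 */
static bool ve_umh_worker(void *data)
{
	struct ve_umh_data *umh = data;

	/* ... process one pending request from the container's queue ... */
	return false;
}

/* Invoked when the task gets SIGKILL (e.g. on container shutdown). */
static void ve_umh_handle_sigkill(void *data)
{
	/* ... drop pending requests so the task can exit promptly ... */
}

static int ve_umh_start(struct ve_umh_data *umh)
{
	/*
	 * A vhost_task is created as a thread of the calling process,
	 * so it lands in the caller's cgroups automatically -- no
	 * per-container kthreadd needed, and the freezer sees it as a
	 * regular userspace thread.
	 */
	umh->task = vhost_task_create(ve_umh_worker, ve_umh_handle_sigkill,
				      umh, "ve-umh");
	if (!umh->task)
		return -ENOMEM;
	vhost_task_start(umh->task);
	return 0;
}
```

That cgroup-inheritance property is exactly what makes the first patch unnecessary for such threads, which is why the KVM precedent is attractive to us.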
>
>> Second patch adds information into dmesg to identify processes which
>> prevent cgroup from being frozen or just don't allow it to freeze fast
>> enough.
>
> I can see how this can be useful for debugging, however, it resembles
> the existing CONFIG_DETECT_HUNG_TASK and its
> kernel.hung_task_timeout_secs. Could that be used instead?
The hung_task_timeout_secs mechanism detects a hung task only if it is in D state and has not been scheduled, but that is not the only case. For instance, the module I used to reproduce the problem is not detected by this mechanism, as it does schedule (https://github.com/Snorch/proc-hang-module).
Previously we saw tasks sleeping in nfs, presumably waiting for a reply from the server, which prevented freezing; even though hardlockup/softlockup/hung-task warnings were enabled, none of them triggered. Also, one might want a separate timeout for the freezer than for general hang detection (general hang timeouts should probably be higher).
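For context, the manual diagnosis that the second patch would help automate looks roughly like this today (a sketch against the standard cgroup-v2 interface; needs root, and the cgroup path is only an example):

```shell
# Ask the cgroup to freeze (cgroup-v2 freezer interface).
echo 1 > /sys/fs/cgroup/mycontainer/cgroup.freeze

# "frozen 0" here means some task still refuses to freeze.
grep frozen /sys/fs/cgroup/mycontainer/cgroup.events

# Hunt for the culprit by hand: list member tasks and their states.
for pid in $(cat /sys/fs/cgroup/mycontainer/cgroup.procs); do
	awk '{print $1, $2, $3}' "/proc/$pid/stat"
done
```

The loop over /proc/$pid/stat is the part that scales poorly, and it tells you nothing about *why* a task in S state keeps ignoring the freeze request; a dmesg line naming the offender after a timeout would short-circuit all of this.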
>
> Thanks,
> Michal
--
Best regards, Pavel Tikhomirov
Senior Software Developer, Virtuozzo.