Message-ID: <aQt5Sw6aEkLxYIYQ@slm.duckdns.org>
Date: Wed, 5 Nov 2025 06:20:27 -1000
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Calvin Owens <calvin@...nvd.org>, linux-kernel@...r.kernel.org,
Dan Schatzberg <dschatzberg@...a.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Koutný <mkoutny@...e.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Clark Williams <clrkwllms@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>, cgroups@...r.kernel.org,
linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH cgroup/for-6.19 1/2] cgroup: Convert css_set_lock from
spinlock_t to raw_spinlock_t
Hello,
On Wed, Nov 05, 2025 at 09:50:36AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 04, 2025 at 09:32:24AM -1000, Tejun Heo wrote:
> > Convert css_set_lock from spinlock_t to raw_spinlock_t to address RT-related
> > scheduling constraints. cgroup_task_dead() is called from finish_task_switch()
> > which cannot schedule even in PREEMPT_RT kernels, requiring css_set_lock to be
> > a raw spinlock to avoid sleeping in a non-preemptible context.
>
> The constraint for doing so is that each critical section is actually
> bounded in time. The below seems to contain list iteration. I'm thinking
> it is unbounded since userspace is in control of the cgroup hierarchy.
Right, along with the problems Sebastian pointed out, it doesn't look like
this is the way to go. This doesn't need to happen inline; it just needs to
happen after the last task switch. I'll bounce it out and run it
asynchronously after the rq lock is dropped.
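
Roughly, something along these lines (untested sketch, not the actual
patch; cgroup_note_task_dead(), tsk->cg_dead_node and the irq_work/work
names are all made up for illustration). The finish_task_switch() path
only touches a lockless list and arms an irq_work, which is safe with the
rq lock held, and everything that needs css_set_lock runs later from a
workqueue where sleeping on the spinlock_t is fine on PREEMPT_RT:

#include <linux/llist.h>
#include <linux/irq_work.h>
#include <linux/workqueue.h>
#include <linux/sched.h>

/* hypothetical: dead tasks whose css_sets still need to be unlinked */
static LLIST_HEAD(dead_tasks);

static void cgroup_dead_work_fn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&dead_tasks);

	/* worker context can sleep, so the regular spinlock_t is fine */
	spin_lock_irq(&css_set_lock);
	/* ... walk @node and unlink each task's css_set ... */
	spin_unlock_irq(&css_set_lock);
}

static DECLARE_WORK(cgroup_dead_work, cgroup_dead_work_fn);

static void cgroup_dead_irq_work_fn(struct irq_work *work)
{
	/* runs after the rq lock is dropped; queueing work is OK here */
	queue_work(system_wq, &cgroup_dead_work);
}

static DEFINE_IRQ_WORK(cgroup_dead_irq_work, cgroup_dead_irq_work_fn);

/*
 * Called from the finish_task_switch() path. Can't sleep and the rq
 * lock may still be held, so just stash the task and poke irq_work.
 * (Assumes a struct llist_node cg_dead_node added to task_struct.)
 */
void cgroup_note_task_dead(struct task_struct *tsk)
{
	if (llist_add(&tsk->cg_dead_node, &dead_tasks))
		irq_work_queue(&cgroup_dead_irq_work);
}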
Thanks.
--
tejun