Message-ID: <f64n5c6dkdjuaudk5p66mvpjyjulrjytmndqufmdu3uhft46sy@bem2gx34zhkz>
Date: Fri, 15 Aug 2025 14:34:03 +0200
From: Michal Koutný <mkoutny@...e.com>
To: lirongqing <lirongqing@...du.com>
Cc: tj@...nel.org, hannes@...xchg.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cgroup: Remove redundant rcu_read_lock() in
spin_lock_irq() section
Hello RongQing.
On Fri, Aug 15, 2025 at 05:14:30PM +0800, lirongqing <lirongqing@...du.com> wrote:
> From: Li RongQing <lirongqing@...du.com>
>
> Since spin_lock_irq() already disables preemption and task_css_set()
> is protected by css_set_lock, the rcu_read_lock() calls are unnecessary
> within the critical section. Remove them to simplify the code.
>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
So there is some inconsistency between cgroup_migrate() and
cgroup_attach_task() (see also 674b745e22b3c ("cgroup: remove
rcu_read_lock()/rcu_read_unlock() in critical section of
spin_lock_irq()")) -- that'd warrant unification. Have you spotted other
instances of this?
The RCU lock is there not only because of task_css_set() but also for
while_each_thread(). I'd slightly prefer honoring the advice from Paul
[1] and keeping a redundant rcu_read_lock() -- for more robustness
against future reworks. I'm not convinced this simplification has other
visible benefits.
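For reference, the pattern under discussion looks roughly like this -- a
simplified sketch of the cgroup_attach_task() iteration, not verbatim
upstream code:

```c
/* Simplified sketch (not verbatim kernel code): css_set_lock protects
 * task_css_set(), but the RCU read lock also covers the
 * while_each_thread() walk -- and keeping it makes the section robust
 * if the locking around it is later reworked.
 */
spin_lock_irq(&css_set_lock);
rcu_read_lock();	/* arguably redundant today, cheap insurance */
task = leader;
do {
	cgroup_migrate_add_src(task_css_set(task), dst_cgrp, &mgctx);
	if (!threadgroup)
		break;
} while_each_thread(leader, task);
rcu_read_unlock();
spin_unlock_irq(&css_set_lock);
```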
Thanks,
Michal
[1] https://lore.kernel.org/all/20220107213612.GQ4202@paulmck-ThinkPad-P17-Gen-1/