Message-ID: <kz6e3oadkmrl7elk6z765t2hgbcqbd2fxvb2673vbjflbjxqck@suy4p2mm7dvw>
Date: Mon, 9 Sep 2024 16:19:38 +0200
From: Michal Koutný <mkoutny@...e.com>
To: Chen Ridong <chenridong@...wei.com>
Cc: martin.lau@...ux.dev, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com, haoluo@...gle.com,
jolsa@...nel.org, tj@...nel.org, lizefan.x@...edance.com, hannes@...xchg.org,
roman.gushchin@...ux.dev, bpf@...r.kernel.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/1] cgroup: fix deadlock caused by cgroup_mutex and
cpu_hotplug_lock
On Sat, Aug 17, 2024 at 09:33:34AM GMT, Chen Ridong <chenridong@...wei.com> wrote:
> The reason for this issue is that cgroup_mutex and cpu_hotplug_lock are
> acquired in different tasks, which may lead to deadlock.
> It can lead to a deadlock through the following steps:
> 1. A large number of cpusets are deleted asynchronously, which puts a
> large number of cgroup_bpf_release works into system_wq. The max_active
> of system_wq is WQ_DFL_ACTIVE(256). Consequently, all active works are
> cgroup_bpf_release works, and many cgroup_bpf_release works will be put
> into the inactive queue. As illustrated in the diagram, there are 256 (in
> the active queue) + n (in the inactive queue) works.
> 2. Setting watchdog_thresh will hold cpu_hotplug_lock.read and put
> smp_call_on_cpu work into system_wq. However, step 1 has already filled
> system_wq, so 'sscs.work' is put into the inactive queue. 'sscs.work' has
> to wait until the works that were put into the inactive queue earlier
> have executed (n cgroup_bpf_release works), so it will be blocked for a while.
> 3. Cpu offline requires cpu_hotplug_lock.write, which is blocked by step 2.
> 4. Cpusets that were deleted in step 1 put cgroup_release works into
> cgroup_destroy_wq. They are competing to get cgroup_mutex all the time.
> When cgroup_mutex is acquired by the work at css_killed_work_fn, it will
> call cpuset_css_offline, which needs to acquire cpu_hotplug_lock.read.
> However, cpuset_css_offline will be blocked by step 3.
> 5. At this moment, there are 256 works in the active queue that are
> cgroup_bpf_release works; they are attempting to acquire cgroup_mutex, and as
> a result, all of them are blocked. Consequently, sscs.work cannot be
> executed. Ultimately, this situation leads to four processes being
> blocked, forming a deadlock.
>
> system_wq(step1)                WatchDog(step2)                  cpu offline(step3)        cgroup_destroy_wq(step4)
> ...
> 2000+ cgroups deleted async
> 256 actives + n inactives
>                                 __lockup_detector_reconfigure
>                                 P(cpu_hotplug_lock.read)
>                                 put sscs.work into system_wq
> 256 + n + 1(sscs.work)
> sscs.work waits to be executed
>                                 waiting for sscs.work to finish
>                                                                  percpu_down_write
>                                                                  P(cpu_hotplug_lock.write)
>                                                                  ...blocking...
>                                                                                            css_killed_work_fn
>                                                                                            P(cgroup_mutex)
>                                                                                            cpuset_css_offline
>                                                                                            P(cpu_hotplug_lock.read)
>                                                                                            ...blocking...
> 256 cgroup_bpf_release
> mutex_lock(&cgroup_mutex);
> ...blocking...
Thanks, Ridong, for laying this out.
Let me try to extract the core of the deps above.
The correct lock ordering is: cgroup_mutex then cpu_hotplug_lock.
However, the smp_call_on_cpu() under cpus_read_lock may lead to
a deadlock (ABBA over those two locks).
This is OK
   thread T                               system_wq worker

                                          lock(cgroup_mutex)   (II)
                                          ...
                                          unlock(cgroup_mutex)
   down(cpu_hotplug_lock.read)
   smp_call_on_cpu
     queue_work_on(cpu, system_wq, scss)   (I)
                                          scss.func
     wait_for_completion(scss)
   up(cpu_hotplug_lock.read)
However, there is no ordering between (I) and (II), so they can also happen
in the opposite order:
   thread T                               system_wq worker

   down(cpu_hotplug_lock.read)
   smp_call_on_cpu
     queue_work_on(cpu, system_wq, scss)   (I)
                                          lock(cgroup_mutex)   (II)
                                          ...
                                          unlock(cgroup_mutex)
                                          scss.func
     wait_for_completion(scss)
   up(cpu_hotplug_lock.read)
And here the thread T + system_wq worker effectively take
cpu_hotplug_lock and cgroup_mutex in the wrong order. (And since they're
two threads, it won't be caught by lockdep.)
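To make that concrete, here is a rough C sketch of the two sides; thread_T()
and other_work_fn() are made-up names for illustration, only cpus_read_lock(),
smp_call_on_cpu() and cgroup_lock() are the real interfaces involved:

  #include <linux/cpu.h>
  #include <linux/smp.h>
  #include <linux/cgroup.h>
  #include <linux/workqueue.h>

  /* Runs as the sscs work item on a system_wq worker.              (I) */
  static int scss_func(void *arg)
  {
          return 0;
  }

  /* "thread T", e.g. the watchdog_thresh writer path. */
  static void thread_T(void)
  {
          cpus_read_lock();        /* down(cpu_hotplug_lock.read) */
          /* queues the work on system_wq and waits until scss_func has run */
          smp_call_on_cpu(0, scss_func, NULL, false);
          cpus_read_unlock();      /* up(cpu_hotplug_lock.read) */
  }

  /* Some other work item on system_wq, e.g. cgroup_bpf_release.    (II) */
  static void other_work_fn(struct work_struct *work)
  {
          cgroup_lock();           /* lock(cgroup_mutex) */
          /* ... */
          cgroup_unlock();
  }

If enough work items like other_work_fn() fill the pool's active slots and
block on cgroup_mutex, scss_func never gets to run and thread T waits
indefinitely while holding cpu_hotplug_lock.read.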
By that reasoning, any holder of cgroup_mutex on system_wq makes the system
susceptible to a deadlock (in the presence of cpu_hotplug_lock waiting
writers + cpuset operations). And the two work items must meet in the same
worker's processing, hence the probability is low (zero?) with fewer than
WQ_DFL_ACTIVE items.
(And more generally, any lock that is ordered before cpu_hotplug_lock
should not be taken in system_wq work functions. Or at least such work
items should not saturate WQ_DFL_ACTIVE workers.)
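As a minimal sketch of that guidance (the workqueue and function names below
are hypothetical, not from the patch):

  static struct workqueue_struct *my_release_wq;

  static void my_release_fn(struct work_struct *work)
  {
          /* Taking a lock ordered before cpu_hotplug_lock here does not
           * endanger sscs.work, because this work never competes for
           * system_wq's WQ_DFL_ACTIVE slots. */
  }

  static int __init my_release_init(void)
  {
          /* dedicated workqueue instead of system_wq */
          my_release_wq = alloc_workqueue("my_release", 0, 0);
          return my_release_wq ? 0 : -ENOMEM;
  }

Work items are then queued with queue_work(my_release_wq, ...) rather than on
system_wq.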
Wrt other uses of cgroup_mutex, I only see
  bpf_map_free_in_work
    queue_work(system_unbound_wq)
      bpf_map_free_deferred
        ops->map_free == cgroup_storage_map_free
          cgroup_lock()
which is safe since it uses a different workqueue than system_wq.
> To fix the problem, place cgroup_bpf_release works on cgroup_destroy_wq,
> which can break the loop and solve the problem.
Yes, it moves the problematic cgroup_mutex holder away from system_wq,
and cgroup_destroy_wq cannot cause similar problems because there are
no explicit waiters for particular work items or their flushing.
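Roughly, the functional change as I read it from the diffstat below (sketched
from the surrounding code, not copied verbatim from the patch):

  /* kernel/bpf/cgroup.c: release callback of cgrp->bpf.refcnt. The work
   * that will take cgroup_mutex is queued on cgroup_destroy_wq (made
   * visible via cgroup-internal.h) instead of system_wq. */
  static void cgroup_bpf_release_fn(struct percpu_ref *ref)
  {
          struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);

          INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release);
          queue_work(cgroup_destroy_wq, &cgrp->bpf.release_work);
  }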
> System wqs are for misc things which shouldn't create a large number
> of concurrent work items. If something is going to generate
> >WQ_DFL_ACTIVE(256) concurrent work
> items, it should use its own dedicated workqueue.
Actually, I'm not sure (because I lack workqueue knowledge) if producing
fewer than WQ_DFL_ACTIVE work items completely eliminates the chance of
two offending work items producing the wrong lock ordering.
> Fixes: 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf from cgroup itself")
I'm now indifferent as to whether this is needed (perhaps in the sense that
it is the _latest_ of multiple changes that contributed to the possibility
of this deadlock scenario).
> Link: https://lore.kernel.org/cgroups/e90c32d2-2a85-4f28-9154-09c7d320cb60@huawei.com/T/#t
> Signed-off-by: Chen Ridong <chenridong@...wei.com>
> ---
> kernel/bpf/cgroup.c | 2 +-
> kernel/cgroup/cgroup-internal.h | 1 +
> kernel/cgroup/cgroup.c | 2 +-
> 3 files changed, 3 insertions(+), 2 deletions(-)
I have convinced myself now that you can put
Reviewed-by: Michal Koutný <mkoutny@...e.com>
Regards,
Michal