Message-ID: <ZpAYGU7x6ioqBir5@slm.duckdns.org>
Date: Thu, 11 Jul 2024 07:36:25 -1000
From: Tejun Heo <tj@...nel.org>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: chenridong <chenridong@...wei.com>, martin.lau@...ux.dev,
ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com,
haoluo@...gle.com, jolsa@...nel.org, lizefan.x@...edance.com,
hannes@...xchg.org, bpf@...r.kernel.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next] cgroup: Fix AA deadlock caused by cgroup_bpf_release

Hello,

On Thu, Jul 11, 2024 at 03:52:34AM +0000, Roman Gushchin wrote:
> > The max_active of system_wq is WQ_DFL_ACTIVE (256). If all active work items
> > are cgroup bpf release works, they block an smp_call_on_cpu work item that is
> > enqueued after them. smp_call_on_cpu, holding cpu_hotplug_lock, then waits
> > for completion, but the completion never comes because the cgroup bpf release
> > works cannot take cgroup_mutex and never finish.
> > However, placing the cgroup bpf release works on the cgroup destroy workqueue
> > never blocks the smp_call_on_cpu work, which breaks the loop. Thus it solves
> > the problem.
>
> Tejun,
>
> do you have an opinion on this?
>
> If there are certain limitations from the cgroup side on what can be done
> in a generic work context, it would be nice to document (e.g. don't grab
> cgroup mutex), but I still struggle to understand what exactly is wrong
> with the blamed commit.

I think the general rule here is more "don't saturate the system wqs" rather
than "don't grab cgroup_mutex from system_wq". The system wqs are for misc
things which shouldn't create a large number of concurrent work items. If
something is going to generate 256+ concurrent work items, it should use its
own workqueue. We don't know what's on the system wqs and can't expect their
users to police specific lock usage.
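
To make it concrete, "use its own workqueue" would look something along the
lines of the sketch below. This is only illustrative, not the actual patch;
the workqueue name, the init hook and the field names are made up for the
example:

	#include <linux/workqueue.h>
	#include <linux/init.h>

	/*
	 * Sketch only: give the bursty release path a dedicated workqueue so
	 * that it can't saturate system_wq.  All names here are illustrative.
	 */
	static struct workqueue_struct *cgroup_bpf_release_wq;

	static int __init cgroup_bpf_release_wq_init(void)
	{
		/*
		 * max_active == 0 keeps the default limit, but these work
		 * items no longer compete with unrelated system_wq users.
		 */
		cgroup_bpf_release_wq = alloc_workqueue("cgroup_bpf_release",
							0, 0);
		return cgroup_bpf_release_wq ? 0 : -ENOMEM;
	}
	core_initcall(cgroup_bpf_release_wq_init);

	/*
	 * and in the percpu_ref release callback, queue there instead of
	 * system_wq, e.g.:
	 *
	 *	queue_work(cgroup_bpf_release_wq, &cgrp->bpf.release_work);
	 */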

Another aspect is that the current WQ_DFL_ACTIVE is an arbitrary number I
came up with close to 15 years ago. Machine sizes have increased several
times over, if not by an order of magnitude, since then. So, "there can't be
a reasonable situation where a 256 concurrency limit isn't enough" is most
likely not true anymore, and the limit needs to be pushed upward.
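
For reference, the number comes from include/linux/workqueue.h, roughly as
below (the exact form differs between kernel versions, but the values have
been 512/256 for a long time):

	enum {
		WQ_MAX_ACTIVE	= 512,			/* hard cap */
		WQ_DFL_ACTIVE	= WQ_MAX_ACTIVE / 2,	/* 256, what system wqs get */
	};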

Thanks.

--
tejun