Message-ID: <ZYTDz10ohj0kokum@mtj.duckdns.org>
Date: Fri, 22 Dec 2023 08:01:35 +0900
From: Tejun Heo <tj@...nel.org>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: linux-kernel@...r.kernel.org, Naohiro.Aota@....com,
kernel-team@...a.com
Subject: Re: [PATCHSET wq/for-6.8] workqueue: Implement system-wide
max_active for unbound workqueues
Hello, Lai.
On Wed, Dec 20, 2023 at 05:20:18PM +0800, Lai Jiangshan wrote:
> The patchset seems complicated to me. For me, reverting a bit to the behavior
> of 636b927eba5b ("workqueue: Make unbound workqueues to use per-cpu
> pool_workqueues"), like the following code (untested, just for showing
> the idea), seems simpler.
>
> max_active will have the same behavior as before if the wq is configured
> with WQ_AFFN_NUMA. For WQ_AFFN_{CPU|SMT|CACHE}, the problem
> isn't fixed.
Yeah, it is complicated, but the complications come from the fact that the
domain in which we count nr_active can't match the worker_pools, and that's
because unbound workqueue behavior is noticeably worse if we let work items
roam across L3 boundaries on modern processors with multiple chiplets or
otherwise segmented L3 caches.
We need WQ_AFFN_CACHE to behave well on these chips, and max_active
enforcement breaks if we keep it tied to the worker_pools in such cases, so
I'm afraid our hands are tied here. The hardware has changed and we have to
adapt to it. In this case, that comes at the cost of the extra complexity of
divorcing the max_active enforcement domain from worker_pool boundaries.
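For context, here's a minimal sketch of the kind of usage this is about. It
is illustrative only and not taken from the patchset; the queue name, work
function and the max_active value of 8 are made up:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work)
{
	/* cache-sensitive processing that benefits from staying in one L3 */
}

static DECLARE_WORK(my_work, my_work_fn);
static struct workqueue_struct *my_wq;

static int __init my_init(void)
{
	/*
	 * Unbound workqueue with max_active = 8.  WQ_SYSFS exposes the
	 * affinity_scope knob under /sys/devices/virtual/workqueue/my_wq/,
	 * which can be set to "cache" (WQ_AFFN_CACHE) so that work items
	 * stay within a single L3 domain.  This series is about making the
	 * max_active limit apply system-wide instead of per pool in that
	 * kind of configuration.
	 */
	my_wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_SYSFS, 8);
	if (!my_wq)
		return -ENOMEM;

	queue_work(my_wq, &my_work);
	return 0;
}
module_init(my_init);
MODULE_LICENSE("GPL");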
Thanks.
--
tejun