Message-ID: <8e66cf1a-09e2-4261-b2b4-6a5e608b9ec7@kernel.org>
Date: Tue, 3 Feb 2026 15:34:22 -0500
From: Chuck Lever <cel@...nel.org>
To: Tejun Heo <tj@...nel.org>
Cc: jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
Chuck Lever <chuck.lever@...cle.com>
Subject: Re: [RFC PATCH] workqueue: Automatic affinity scope fallback for
single-pod topologies
On 2/3/26 3:29 PM, Tejun Heo wrote:
> On Tue, Feb 03, 2026 at 03:14:46PM -0500, Chuck Lever wrote:
>>> While I understand the problem, I don't think dropping down to core boundary
>>> for unbound workqueues by default makes sense. That may help with some use
>>> cases but cause problems with others.
>>
>> I've never seen a case where it doesn't help. In order to craft an
>> alternative, I'll need some examples of cases it would hurt. Is it only the
>> SMT case that is concerning?
>
> It's just a lot of separate pools on large machines. If you have relatively
> high concurrency, the number of workers can go pretty high. They'd also
> migrate back and forth more depending on usage pattern and have worse cache
> locality. Imagine a bursty workload wandering through the system: with
> nr_cores pools, it can easily end up with kworkers > nr_cores *
> max_concurrency.
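To put rough numbers on that (illustrative only, not measured): with
per-core pools on a 64-core box, a bursty workload sweeping across the
machine can leave idle or still-draining workers behind in each of the
64 pools, so the total kworker count scales with the pool count rather
than with the concurrency the workload actually offers.
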
The patch addresses that, I'd hope, by switching to per-CPU pods only
on single-pod (i.e., simple) systems. Larger, more complicated
topologies are left unchanged. I imagine that on a single-pod machine
with a large number of cores, per-CPU pool locking will nearly always
be a win.
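
Roughly, the shape I have in mind is below. This is only a sketch
against the existing wq_pod_types[] / wq_affn_dfl machinery in
kernel/workqueue.c, called from some hook once topology is known;
wq_affn_dfl_fallback() and wq_affn_dfl_set_by_user are made-up names,
not existing symbols.

static void wq_affn_dfl_fallback(void)
{
	const struct wq_pod_type *pt = &wq_pod_types[WQ_AFFN_CACHE];

	/* Respect an explicitly configured scope (hypothetical flag). */
	if (wq_affn_dfl_set_by_user)
		return;

	/*
	 * Multiple pods: per-pod pools already split lock traffic
	 * across the machine, so leave the default scope alone.
	 */
	if (pt->nr_pods > 1)
		return;

	/*
	 * A single pod spans every CPU, so all unbound work funnels
	 * through one pool lock. Fall back to per-CPU pods on these
	 * simple topologies.
	 */
	wq_affn_dfl = WQ_AFFN_CPU;
}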
--
Chuck Lever