Message-ID: <ZJNMk9oSp1_IYXLU@slm.duckdns.org>
Date: Wed, 21 Jun 2023 09:16:35 -1000
From: Tejun Heo <tj@...nel.org>
To: Pin-yen Lin <treapking@...omium.org>
Cc: Brian Norris <briannorris@...omium.org>, jiangshanlai@...il.com,
torvalds@...ux-foundation.org, peterz@...radead.org,
linux-kernel@...r.kernel.org, kernel-team@...a.com,
joshdon@...gle.com, brho@...gle.com, nhuck@...gle.com,
agk@...hat.com, snitzer@...nel.org, void@...ifault.com
Subject: Re: [PATCHSET v1 wq/for-6.5] workqueue: Improve unbound workqueue
execution locality

Hello, Pin-yen.

On Tue, Jun 13, 2023 at 05:26:48PM +0800, Pin-yen Lin wrote:
...
> > 1. affinity_scope = cache, affinity_strict = 1
> >
> > 2. affinity_scope = cpu, affinity_strict = 0
> >
> > 3. affinity_scope = cpu, affinity_strict = 1
>
> I pulled down the v2 series and tried these settings on our 5.15
> kernel. Unfortunately, none of them showed a significant improvement
> in throughput. It's hard to tell which one is best because of the
> noise, but the throughput is still far below our 4.19 kernel or
> simply pinning everything to a single core.
>
> All four settings (the three listed above plus the default) yield
> results between 90 and 120 Mbps, while pinning tasks to a single core
> consistently reaches >250 Mbps.

I find that perplexing, given that switching to a per-cpu workqueue
remedies the situation quite a bit, which is how this patchset came to
be. #3 is the same as a per-cpu workqueue, so if you're seeing
noticeably different performance numbers between #3 and a per-cpu
workqueue, something is wrong with either the code or the test setup.
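
For reference, once a workqueue has WQ_SYSFS set, these knobs can also
be flipped at runtime through sysfs rather than rebuilt into the
kernel; a minimal sketch, assuming a workqueue named "mywq" (the name
is made up for illustration), e.g. for setting #1 above:

  # affinity_scope = cache, affinity_strict = 1
  echo cache > /sys/devices/virtual/workqueue/mywq/affinity_scope
  echo 1 > /sys/devices/virtual/workqueue/mywq/affinity_strict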

Also, if you have to pin the work items to a single CPU or some subset
of CPUs, you can just set WQ_SYSFS on the workqueue and set the
affinities through its sysfs interface instead of hard-coding the
workaround for specific hardware.
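
Something like the following should do; again a minimal sketch, with
the workqueue name "mywq" made up for illustration:

	/*
	 * Create an unbound workqueue whose attributes (cpumask,
	 * affinity_scope, ...) are exposed under
	 * /sys/devices/virtual/workqueue/mywq/.
	 */
	struct workqueue_struct *wq;

	wq = alloc_workqueue("mywq", WQ_UNBOUND | WQ_SYSFS, 0);
	if (!wq)
		return -ENOMEM;

Userspace can then pin it from e.g. an init script, without any
hardware-specific code in the kernel:

	echo 1 > /sys/devices/virtual/workqueue/mywq/cpumask
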
Thanks.
--
tejun