Message-ID: <ZIfY5zhhHU9IgOqx@slm.duckdns.org>
Date: Mon, 12 Jun 2023 16:48:07 -1000
From: Tejun Heo <tj@...nel.org>
To: Brian Norris <briannorris@...omium.org>
Cc: jiangshanlai@...il.com, torvalds@...ux-foundation.org,
peterz@...radead.org, linux-kernel@...r.kernel.org,
kernel-team@...a.com, joshdon@...gle.com, brho@...gle.com,
nhuck@...gle.com, agk@...hat.com, snitzer@...nel.org,
void@...ifault.com, treapking@...omium.org
Subject: Re: [PATCHSET v1 wq/for-6.5] workqueue: Improve unbound workqueue
execution locality
Hello,
On Mon, Jun 12, 2023 at 04:56:06PM -0700, Brian Norris wrote:
> Thanks for the CC; my colleague tried out your patches (ported to 5.15
> with some minor difficulty), and aside from some crashes (already noted
> by others, although we didn't pull the proposed v2 fixes), he didn't
Yeah, there were a few subtle bugs that v2 fixes.
> notice a significant change in performance on our particular test system
> and WiFi-throughput workload. I don't think we expected a lot though,
> per the discussion at:
>
> https://lore.kernel.org/all/ZFvpJb9Dh0FCkLQA@google.com/
That's disappointing. I was actually expecting the default behavior to
restrain migrations across L3 boundaries strongly enough to make a
meaningful difference. Can you enable WQ_SYSFS and test the following
configs?
1. affinity_scope = cache, affinity_strict = 1
2. affinity_scope = cpu, affinity_strict = 0
3. affinity_scope = cpu, affinity_strict = 1
#3 basically turns it into a percpu workqueue, so it should perform more or
less the same as a percpu workqueue without affecting everyone else.
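In case it helps with the testing, the driver-side change is just adding
WQ_SYSFS when the workqueue is allocated. A rough sketch only - the module
and workqueue names below are made up, in your case it'd be wherever the
wifi driver allocates its unbound workqueue:

  #include <linux/module.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *test_wq;

  static int __init test_wq_init(void)
  {
          /*
           * WQ_SYSFS is what makes the workqueue show up under
           * /sys/devices/virtual/workqueue/<name>/; with the patchset
           * applied that's where affinity_scope and affinity_strict live.
           */
          test_wq = alloc_workqueue("test_unbound_wq",
                                    WQ_UNBOUND | WQ_SYSFS, 0);
          if (!test_wq)
                  return -ENOMEM;
          return 0;
  }

  static void __exit test_wq_exit(void)
  {
          destroy_workqueue(test_wq);
  }

  module_init(test_wq_init);
  module_exit(test_wq_exit);
  MODULE_LICENSE("GPL");

With that in place, the three configs above are just runtime writes to those
two sysfs files, so no rebuild is needed between runs.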
Any chance you can post the topology details of the affected setup? How are
the caches and cores laid out?
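If you don't have that handy, something like the quick userspace hack below
can dump the L3 sharing per CPU. Just a sketch reading the sysfs cache
topology; it assumes index3 is the L3 on your machine, which may not hold
everywhere:

  #include <stdio.h>

  int main(void)
  {
          char path[128], buf[256];
          FILE *f;
          int cpu;

          for (cpu = 0; ; cpu++) {
                  /* index3 is typically the unified L3; adjust if your
                   * machine numbers its cache levels differently */
                  snprintf(path, sizeof(path),
                           "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
                           cpu);
                  f = fopen(path, "r");
                  if (!f)
                          break;
                  if (fgets(buf, sizeof(buf), f))
                          printf("cpu%d shares L3 with: %s", cpu, buf);
                  fclose(f);
          }
          return 0;
  }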
Thanks.
--
tejun