Message-ID: <ZbpElS5sQV_o9NG1@localhost.localdomain>
Date: Wed, 31 Jan 2024 14:01:09 +0100
From: Juri Lelli <juri.lelli@...hat.com>
To: Waiman Long <longman@...hat.com>
Cc: Tejun Heo <tj@...nel.org>, Lai Jiangshan <jiangshanlai@...il.com>,
	linux-kernel@...r.kernel.org, Cestmir Kalina <ckalina@...hat.com>,
	Alex Gladkov <agladkov@...hat.com>
Subject: Re: [RFC PATCH 0/3] workqueue: Enable unbound cpumask update on
 ordered workqueues

Hi Waiman,

Thanks for working on this!

On 30/01/24 13:33, Waiman Long wrote:
> Ordered workqueues do not currently follow changes made to the
> global unbound cpumask because per-pool workqueue changes may break
> the ordering guarantee. IOW, a work function in an ordered workqueue
> may run on a cpuset-isolated CPU.
> 
> This series enables ordered workqueues to follow changes made to the
> global unbound cpumask by temporarily saving the work items in an
> internal queue until the old pwq has been properly flushed and is
> about to be freed. At that point, those work items, if present, are
> queued back to the new pwq to be executed.
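
For my own understanding, here's a rough sketch of how I read the
approach (the names below, e.g. pending_on_switch, are made up for
illustration and are not the actual patch code): while the old pwq is
draining, incoming work is parked on an internal list, and once the
old pwq is released the parked items are requeued so they land on the
new pwq and hence follow the new unbound cpumask.

  static void ordered_wq_park_work(struct workqueue_struct *wq,
                                   struct work_struct *work)
  {
          /* old pwq still being flushed: hold the work internally */
          list_add_tail(&work->entry, &wq->pending_on_switch);
  }

  static void ordered_wq_requeue_parked(struct workqueue_struct *wq)
  {
          struct work_struct *work, *n;
          LIST_HEAD(requeue);

          /* old pwq fully flushed and about to be freed */
          list_splice_tail_init(&wq->pending_on_switch, &requeue);

          /* requeue so the parked items run on the new pwq */
          list_for_each_entry_safe(work, n, &requeue, entry) {
                  list_del_init(&work->entry);
                  queue_work(wq, work);
          }
  }

(Locking intentionally elided, this is only to check that I got the
idea right.)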

I took it for a quick first spin (on top of wq/for-6.9) and this is what
I'm seeing.

Let's take the edac-poller ordered wq, as the behavior seems to be the
same for the rest.

Initially we have (using wq_dump.py)

wq_unbound_cpumask=0xffffffff 000000ff
..
pool[80] ref= 44 nice=  0 idle/workers=  2/  2 cpus=0xffffffff 000000ff pod_cpus=0xffffffff 000000ff
..
edac-poller                      ordered    80 80 80 80 80 80 80 80 ...
..
edac-poller                      0xffffffff 000000ff    345 0xffffffff 000000ff

after I

# echo 3 >/sys/devices/virtual/workqueue/cpumask

I get

wq_unbound_cpumask=00000003
..
pool[86] ref= 44 nice=  0 idle/workers=  2/  2 cpus=00000003 pod_cpus=00000003
..
edac-poller                      ordered    86 86 86 86 86 86 86 86 86 86 ...
..
edac-poller                      0xffffffff 000000ff    345 0xffffffff 000000ff

So, IIUC, the pool and wq -> pool mappings are updated correctly, but
the wq.unbound_cpus (and the associated rescuer affinity) is left
untouched. Is this expected, or are we maybe still missing an
additional step?
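
Naively, I would have expected something along the lines of the purely
illustrative snippet below to also run when the unbound cpumask
changes, i.e. copying the new attrs into wq->unbound_attrs and
re-binding the rescuer (new_attrs here is just a placeholder for
whatever carries the updated mask), but I may well be misreading the
intent:

        /* illustrative only, not a suggestion of where/how to do it */
        copy_workqueue_attrs(wq->unbound_attrs, new_attrs);
        if (wq->rescuer)
                set_cpus_allowed_ptr(wq->rescuer->task, new_attrs->cpumask);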

Best,
Juri

