Message-Id: <20251117030913.3084-2-jiangshanlai@gmail.com>
Date: Mon, 17 Nov 2025 11:09:11 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <jiangshan.ljs@...group.com>,
	Juri Lelli <juri.lelli@...hat.com>,
	Waiman Long <longman@...hat.com>,
	Tejun Heo <tj@...nel.org>,
	Lai Jiangshan <jiangshanlai@...il.com>
Subject: [PATCH 1/3] workqueue: Update the rescuer's affinity only when it is detached

From: Lai Jiangshan <jiangshan.ljs@...group.com>

When a rescuer is attached to a pool, its affinity should be managed
only by the pool.

Updating a detached rescuer's affinity is still meaningful, however, so
that the rescuer does not disrupt isolated CPUs when it is woken up.

However, commit d64f2fa064f8 ("kernel/workqueue: Let rescuers follow
unbound wq cpumask changes") updates the affinity unconditionally,
which causes several issues:

1) It changes the affinity even when the rescuer is already attached to
   a pool, which violates the pool's affinity management.

2) It misses the rescuers of the PERCPU workqueues, so isolated CPUs
   can still be disrupted by those rescuers when they are summoned.

3) The affinity set on a detached rescuer should be consistent across
   all paths. It could be either wq_unbound_cpumask or
   unbound_effective_cpumask(wq). The related paths are:
       rescuer's worker_detach_from_pool()
       update wq_unbound_cpumask
       update wq's cpumask
       init_rescuer()
   Either affinity is OK as long as it is consistent across all paths,
   but using unbound_effective_cpumask(wq) requires much more code to
   maintain that consistency, and it doesn't buy much since the
   affinity only takes effect while the rescuer is not processing work.
   wq_unbound_cpumask is therefore preferable.

Fix issue 1) by testing rescuer->pool before updating, with
wq_pool_attach_mutex held.

Fix issue 2) by moving the rescuer affinity update to the code that
updates wq_unbound_cpumask, and making it also cover the PERCPU
workqueues.

Partially clean up consistency issue 3) by using wq_unbound_cpumask,
so that the "update wq's cpumask" path no longer needs to maintain it,
and both the "update wq_unbound_cpumask" path and the rescuer's
worker_detach_from_pool() use wq_unbound_cpumask.

The remaining affinity-consistency cleanup for init_rescuer() can be
done in a future patch.
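The fix for issue 1) boils down to a test-and-set under the same mutex
that attach/detach take. The following is a standalone userspace sketch
of that pattern, not kernel code: the names (worker, pool, attach_mutex,
update_detached_affinity) mirror the kernel's but are invented here, and
a plain int stands in for the task's cpumask.

```c
#include <pthread.h>
#include <stddef.h>

struct pool {
	int nr;                 /* dummy member; only the pointer matters */
};

struct worker {
	struct pool *pool;      /* non-NULL while attached; owned by the pool */
	int cpumask;            /* stand-in for the task's allowed-CPU mask */
};

static pthread_mutex_t attach_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Model of the patched path: update a rescuer's affinity only when it
 * is detached, with the attach mutex held across both the test of
 * rescuer->pool and the update, so an attach cannot race in between.
 * Returns 1 if the mask was updated, 0 if the rescuer was attached.
 */
static int update_detached_affinity(struct worker *rescuer, int new_mask)
{
	int updated = 0;

	pthread_mutex_lock(&attach_mutex);
	if (!rescuer->pool) {
		rescuer->cpumask = new_mask;
		updated = 1;
	}
	pthread_mutex_unlock(&attach_mutex);
	return updated;
}
```

While the rescuer is attached (pool non-NULL) the update is refused and
the pool remains the sole manager of the affinity, which is exactly the
invariant the patch restores.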

Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Waiman Long <longman@...hat.com>
Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
 kernel/workqueue.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index af182a19a8b1..9da679c621dc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5411,11 +5411,6 @@ static void apply_wqattrs_commit(struct apply_wqattrs_ctx *ctx)
 	/* update node_nr_active->max */
 	wq_update_node_max_active(ctx->wq, -1);
 
-	/* rescuer needs to respect wq cpumask changes */
-	if (ctx->wq->rescuer)
-		set_cpus_allowed_ptr(ctx->wq->rescuer->task,
-				     unbound_effective_cpumask(ctx->wq));
-
 	mutex_unlock(&ctx->wq->mutex);
 }
 
@@ -6974,6 +6969,11 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
 	if (!ret) {
 		mutex_lock(&wq_pool_attach_mutex);
 		cpumask_copy(wq_unbound_cpumask, unbound_cpumask);
+		/* rescuer needs to respect cpumask changes when it is not attached */
+		list_for_each_entry(wq, &workqueues, list) {
+			if (wq->rescuer && !wq->rescuer->pool)
+				unbind_worker(wq->rescuer);
+		}
 		mutex_unlock(&wq_pool_attach_mutex);
 	}
 	return ret;
-- 
2.19.1.6.gb485710b

