Message-Id: <20220802084146.3922640-2-vschneid@redhat.com>
Date: Tue, 2 Aug 2022 09:41:44 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>, Lai Jiangshan <jiangshanlai@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Frederic Weisbecker <frederic@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Phil Auld <pauld@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: [RFC PATCH v3 1/3] workqueue: Hold wq_pool_mutex while affining tasks to wq_unbound_cpumask

When unbind_workers() reads wq_unbound_cpumask to set the affinity of
freshly-unbound kworkers, it only holds wq_pool_attach_mutex. This isn't
sufficient as wq_unbound_cpumask is only protected by wq_pool_mutex.
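
For reference, kernel/workqueue.c marks the variable with the "PL"
locking annotation, which the file's locking legend defines as
"wq_pool_mutex protected" (quoted approximately from contemporary
sources):

  /* PL: allowable cpus for unbound wqs and work items */
  static cpumask_var_t wq_unbound_cpumask;
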
This is made more obvious as of recent commit

  46a4d679ef88 ("workqueue: Avoid a false warning in unbind_workers()")

e.g.

  unbind_workers()                           workqueue_set_unbound_cpumask()
    kthread_set_per_cpu(p, -1);
                                             if (cpumask_intersects(wq_unbound_cpumask, cpu_active_mask))
                                               cpumask_copy(wq_unbound_cpumask, cpumask);
    WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, wq_unbound_cpumask) < 0);
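
For context, the racing read sits in unbind_workers(), which at this
point holds only wq_pool_attach_mutex; approximately (paraphrased from
kernel/workqueue.c as of the commit above):

	for_each_pool_worker(worker, pool) {
		kthread_set_per_cpu(worker->task, -1);
		/* wq_unbound_cpumask is read without wq_pool_mutex held */
		if (cpumask_intersects(wq_unbound_cpumask, cpu_active_mask))
			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
							  wq_unbound_cpumask) < 0);
	}
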
Make workqueue_offline_cpu() invoke unbind_workers() with wq_pool_mutex
held.
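
As an illustrative sketch only (not part of this patch), the resulting
locking rule could later be made explicit with a lockdep assertion in
unbind_workers():

	static void unbind_workers(int cpu)
	{
		/* hypothetical: callers must now hold wq_pool_mutex */
		lockdep_assert_held(&wq_pool_mutex);

		/* ... existing unbinding logic ... */
	}
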
Fixes: 10a5a651e3af ("workqueue: Restrict kworker in the offline CPU pool running on housekeeping CPUs")
Signed-off-by: Valentin Schneider <vschneid@...hat.com>
---
 kernel/workqueue.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index aa8a82bc6738..97cc41430a76 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5143,14 +5143,15 @@ int workqueue_offline_cpu(unsigned int cpu)
 	if (WARN_ON(cpu != smp_processor_id()))
 		return -1;
 
+	mutex_lock(&wq_pool_mutex);
+
 	unbind_workers(cpu);
 
 	/* update NUMA affinity of unbound workqueues */
-	mutex_lock(&wq_pool_mutex);
 	list_for_each_entry(wq, &workqueues, list)
 		wq_update_unbound_numa(wq, cpu, false);
-	mutex_unlock(&wq_pool_mutex);
 
+	mutex_unlock(&wq_pool_mutex);
 	return 0;
 }
 
--
2.31.1