Message-ID: <20201015110923.605880079@infradead.org>
Date: Thu, 15 Oct 2020 13:05:37 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: tglx@...utronix.de, mingo@...nel.org
Cc: linux-kernel@...r.kernel.org, bigeasy@...utronix.de,
qais.yousef@....com, swood@...hat.com, peterz@...radead.org,
valentin.schneider@....com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vincent.donnefort@....com, tj@...nel.org,
ouwen210@...mail.com
Subject: [PATCH v3 05/19] workqueue: Manually break affinity on hotplug
Don't rely on the scheduler to force break affinity for us -- it will
stop doing that for per-cpu-kthreads.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Tejun Heo <tj@...nel.org>
---
kernel/workqueue.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4905,6 +4905,10 @@ static void unbind_workers(int cpu)
 
 		pool->flags |= POOL_DISASSOCIATED;
 		raw_spin_unlock_irq(&pool->lock);
+
+		for_each_pool_worker(worker, pool)
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+
 		mutex_unlock(&wq_pool_attach_mutex);
 
 		/*
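
For readers outside the kernel tree, here is a rough user-space analogy of the same idea: explicitly re-pin the worker threads onto the CPUs that remain online instead of relying on anything else to migrate them. This is only a sketch; rebind_workers_away_from(), workers, nr_workers and dying_cpu are made-up names, and pthread_setaffinity_np() merely stands in for set_cpus_allowed_ptr().

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/*
 * Illustrative only: re-home every worker thread onto the CPUs that stay
 * online, a user-space counterpart of the set_cpus_allowed_ptr() calls
 * added in the hunk above. All identifiers here are hypothetical.
 */
static void rebind_workers_away_from(pthread_t *workers, int nr_workers,
				     int dying_cpu, int nr_cpus)
{
	cpu_set_t allowed;
	int cpu, i;

	/* Build the mask of CPUs that remain usable. */
	CPU_ZERO(&allowed);
	for (cpu = 0; cpu < nr_cpus; cpu++)
		if (cpu != dying_cpu)
			CPU_SET(cpu, &allowed);

	/* Explicitly break each worker's affinity to the dying CPU. */
	for (i = 0; i < nr_workers; i++)
		if (pthread_setaffinity_np(workers[i], sizeof(allowed), &allowed))
			fprintf(stderr, "failed to rebind worker %d\n", i);
}

In the kernel patch the equivalent call can fail, which is why the added line checks for a negative return from set_cpus_allowed_ptr() with WARN_ON_ONCE().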