Message-Id: <20201214155457.3430-4-jiangshanlai@gmail.com>
Date: Mon, 14 Dec 2020 23:54:50 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <laijs@...ux.alibaba.com>, Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Valentin Schneider <valentin.schneider@....com>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: [PATCH 03/10] workqueue: Manually break affinity on pool detachment
From: Lai Jiangshan <laijs@...ux.alibaba.com>
Don't rely on the scheduler to force break affinity for us -- it will
stop doing that for per-cpu-kthreads.
Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
kernel/workqueue.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1f5b8385c0cf..1f6cb83e0bc5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1885,6 +1885,16 @@ static void worker_detach_from_pool(struct worker *worker)
 	if (list_empty(&pool->workers))
 		detach_completion = pool->detach_completion;
+
+	/*
+	 * All CPUs in pool->attrs->cpumask could go offline after
+	 * detachment, and the scheduler may no longer break affinity
+	 * for us, so break it ourselves.  This worker can't be unbound
+	 * by workqueue_offline_cpu() because, once detached, it no
+	 * longer belongs to any pool.
+	 */
+	set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
+
 	mutex_unlock(&wq_pool_attach_mutex);

 	/* clear leftover flags without pool->lock after it is detached */
--
2.19.1.6.gb485710b