Message-Id: <20201218170919.2950-4-jiangshanlai@gmail.com>
Date: Sat, 19 Dec 2020 01:09:12 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Valentin Schneider <valentin.schneider@....com>,
Peter Zijlstra <peterz@...radead.org>,
Qian Cai <cai@...hat.com>,
Vincent Donnefort <vincent.donnefort@....com>,
Lai Jiangshan <laijs@...ux.alibaba.com>,
Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: [PATCH -tip V2 03/10] workqueue: Manually break affinity on pool detachment
From: Lai Jiangshan <laijs@...ux.alibaba.com>
The pool->attrs->cpumask might be a single CPU and it can go
down after detachment; the scheduler won't break affinity for
us because the worker is a per-CPU kthread. So we have to do
it ourselves and unbind this worker, which can't be unbound by
workqueue_offline_cpu() since it no longer belongs to any pool
after detachment. Do it unconditionally: breaking affinity for
a non-per-CPU kthread is harmless, and we don't want to rely
on the scheduler's policy on when to break affinity.
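For readers less familiar with CPU affinity handling, here is a
minimal user-space analogy (not the kernel code or kernel API): a
thread pins itself to a single CPU while "attached" and then widens
its affinity back to every possible CPU when it "detaches", using
pthread_setaffinity_np(). The patch below does the in-kernel
equivalent with set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
the program and its names are purely illustrative.

/*
 * Illustrative sketch only: pin a thread to one CPU, then break that
 * single-CPU affinity so the thread no longer depends on that CPU
 * staying online.
 *
 * Build: gcc -pthread -o detach-affinity detach-affinity.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *worker_fn(void *arg)
{
	cpu_set_t mask;

	/* "Attach": bind this worker to a single CPU (CPU 0). */
	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);

	/* ... pool work would run here, affine to CPU 0 ... */

	/*
	 * "Detach": widen the affinity to all possible CPUs; extra bits
	 * for CPUs that don't exist are ignored by the kernel.
	 */
	CPU_ZERO(&mask);
	for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
		CPU_SET(cpu, &mask);
	pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);

	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker_fn, NULL);
	pthread_join(t, NULL);
	return 0;
}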
Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Acked-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
kernel/workqueue.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index fa71520822f0..4d7575311198 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1885,6 +1885,19 @@ static void worker_detach_from_pool(struct worker *worker)
if (list_empty(&pool->workers))
detach_completion = pool->detach_completion;
+
+ /*
+ * The pool->attrs->cpumask might be a single CPU and it can go
+ * down after detachment; the scheduler won't break affinity for
+ * us because the worker is a per-CPU kthread. So we have to do
+ * it ourselves and unbind this worker, which can't be unbound by
+ * workqueue_offline_cpu() since it no longer belongs to any pool
+ * after detachment. Do it unconditionally: breaking affinity for
+ * a non-per-CPU kthread is harmless, and we don't want to rely
+ * on the scheduler's policy on when to break affinity.
+ */
+ set_cpus_allowed_ptr(worker->task, cpu_possible_mask);
+
mutex_unlock(&wq_pool_attach_mutex);
/* clear leftover flags without pool->lock after it is detached */
--
2.19.1.6.gb485710b