Message-Id: <20220804084135.92425-2-jiangshanlai@gmail.com>
Date: Thu, 4 Aug 2022 16:41:28 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <jiangshan.ljs@...group.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Tejun Heo <tj@...nel.org>, Petr Mladek <pmladek@...e.com>,
Michal Hocko <mhocko@...e.com>,
Peter Zijlstra <peterz@...radead.org>,
Wedson Almeida Filho <wedsonaf@...gle.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Valentin Schneider <vschneid@...hat.com>
Subject: [RFC PATCH 1/8] workqueue: Unconditionally set cpumask in worker_attach_to_pool()
From: Lai Jiangshan <jiangshan.ljs@...group.com>
If a worker is spuriously woken up after kthread_bind_mask() but before
worker_attach_to_pool(), and a CPU hot-[un]plug happens during that
interval, the scheduler may push the worker task off its bound CPU and
change its affinity, and worker_attach_to_pool() does not rebind it
properly.
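For reference, the window sits in create_worker(); the following is a
rough sketch of the mainline flow at the time of this patch, with
unrelated details elided:

	static struct worker *create_worker(struct worker_pool *pool)
	{
		...
		worker->task = kthread_create_on_node(worker_thread, worker,
						      pool->node, "kworker/%s", id_buf);
		...
		kthread_bind_mask(worker->task, pool->attrs->cpumask);

		/*
		 * Window: a spurious wakeup here makes the task runnable,
		 * and a concurrent CPU hot-[un]plug can migrate it off the
		 * pool's cpumask before it is attached below.
		 */
		worker_attach_to_pool(worker, pool);
		...
	}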
Set the cpumask unconditionally in worker_attach_to_pool() to fix the
problem.
This also prepares for moving worker_attach_to_pool() from
create_worker() to the start of worker_thread(), which will open this
interval even without a spurious wakeup.
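To illustrate, a hypothetical sketch of that planned move (not part of
this patch; the exact placement and how the pool pointer reaches the
task are assumptions here):

	static int worker_thread(void *__worker)
	{
		struct worker *worker = __worker;

		/*
		 * Attaching at thread start instead of in create_worker()
		 * means the task may already have been woken and migrated,
		 * so worker_attach_to_pool() must rebind the cpumask itself.
		 */
		worker_attach_to_pool(worker, worker->pool);
		...
	}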
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Tejun Heo <tj@...nel.org>
Cc: Petr Mladek <pmladek@...e.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Wedson Almeida Filho <wedsonaf@...gle.com>
Fixes: 640f17c82460 ("workqueue: Restrict affinity change to rescuer")
Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
kernel/workqueue.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1ea50f6be843..928aad7d6123 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1872,8 +1872,11 @@ static void worker_attach_to_pool(struct worker *worker,
 	else
 		kthread_set_per_cpu(worker->task, pool->cpu);
 
-	if (worker->rescue_wq)
-		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+	/*
+	 * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
+	 * online CPUs. It'll be re-applied when any of the CPUs come up.
+	 */
+	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
 
 	list_add_tail(&worker->node, &pool->workers);
 	worker->pool = pool;
--
2.19.1.6.gb485710b