Message-Id: <20221103030933.840989-1-l3b2w1@gmail.com>
Date: Thu, 3 Nov 2022 11:09:33 +0800
From: Binglei Wang <l3b2w1@...il.com>
To: tj@...nel.org, jiangshanlai@...il.com
Cc: linux-kernel@...r.kernel.org, Binglei Wang <l3b2w1@...il.com>
Subject: [PATCH] workqueue: make worker threads stick to HK_TYPE_KTHREAD cpumask
From: Binglei Wang <l3b2w1@...il.com>
When a new worker thread is created, set its affinity to the
HK_TYPE_KTHREAD cpumask.
When a CPU comes online via hotplug, rebind workers' affinity to the
HK_TYPE_KTHREAD cpumask.
This makes worker threads stick to the HK_TYPE_KTHREAD cpumask at all
times.
Signed-off-by: Binglei Wang <l3b2w1@...il.com>
---
kernel/workqueue.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7cd5f5e7e..77b303f5e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1928,6 +1928,7 @@ static struct worker *create_worker(struct worker_pool *pool)
struct worker *worker;
int id;
char id_buf[16];
+ const struct cpumask *cpumask = NULL;
/* ID is needed to determine kthread name */
id = ida_alloc(&pool->worker_ida, GFP_KERNEL);
@@ -1952,7 +1953,10 @@ static struct worker *create_worker(struct worker_pool *pool)
goto fail;
set_user_nice(worker->task, pool->attrs->nice);
- kthread_bind_mask(worker->task, pool->attrs->cpumask);
+
+ if (housekeeping_enabled(HK_TYPE_KTHREAD))
+ cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+ kthread_bind_mask(worker->task, cpumask ? cpumask : pool->attrs->cpumask);
/* successful, attach the worker to the pool */
worker_attach_to_pool(worker, pool);
@@ -5027,20 +5031,26 @@ static void unbind_workers(int cpu)
static void rebind_workers(struct worker_pool *pool)
{
struct worker *worker;
+ const struct cpumask *cpumask = NULL;
lockdep_assert_held(&wq_pool_attach_mutex);
+ if (housekeeping_enabled(HK_TYPE_KTHREAD))
+ cpumask = housekeeping_cpumask(HK_TYPE_KTHREAD);
+
/*
* Restore CPU affinity of all workers. As all idle workers should
* be on the run-queue of the associated CPU before any local
* wake-ups for concurrency management happen, restore CPU affinity
* of all workers first and then clear UNBOUND. As we're called
* from CPU_ONLINE, the following shouldn't fail.
+ *
+ * Also consider the housekeeping HK_TYPE_KTHREAD cpumask.
*/
for_each_pool_worker(worker, pool) {
kthread_set_per_cpu(worker->task, pool->cpu);
WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
- pool->attrs->cpumask) < 0);
+ cpumask ? cpumask : pool->attrs->cpumask) < 0);
}
raw_spin_lock_irq(&pool->lock);
--
2.27.0