Message-ID: <20161017193047.GC6248@htj.duckdns.org>
Date: Mon, 17 Oct 2016 15:30:47 -0400
From: Tejun Heo <tj@...nel.org>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
jiangshanlai@...il.com, akpm@...ux-foundation.org,
kernel-team@...com,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
Balbir Singh <bsingharora@...il.com>
Subject: Re: Oops on Power8 (was Re: [PATCH v2 1/7] workqueue: make workqueue
available early during boot)
Hello, Michael.
Other NUMA archs lazy-initialize the cpu-to-node mapping too, so we
need to fix it from the workqueue side.  This also means that we've
been getting the NUMA node wrong for percpu pools on those archs.
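To make the failure mode concrete, here is a toy userspace model (not
kernel code; every name in it is made up for the example): the percpu
pool latches whatever the cpu-to-node table says at early init and
never sees the real mapping that the arch fills in later.

#include <stdio.h>

#define NR_CPUS 4

/* stand-in for the arch cpu-to-node table; all zero until "arch" init runs */
static int cpu_to_node_table[NR_CPUS];

struct worker_pool {
	int cpu;
	int node;	/* latched once at early init, never refreshed */
};

static struct worker_pool pools[NR_CPUS];

/* models workqueue_init_early(): runs before the arch NUMA setup */
static void early_init(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		pools[cpu].cpu = cpu;
		pools[cpu].node = cpu_to_node_table[cpu];	/* too early */
	}
}

/* models the arch lazily filling in the real mapping later in boot */
static void arch_numa_init(void)
{
	cpu_to_node_table[2] = 1;
	cpu_to_node_table[3] = 1;
}

int main(void)
{
	early_init();
	arch_numa_init();	/* the real mapping shows up only now */

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: pool thinks node %d, real node %d\n",
		       cpu, pools[cpu].node, cpu_to_node_table[cpu]);
	return 0;
}

Built with gcc -std=c99, it prints a cached node of 0 for cpus 2-3
while the real node is 1, which is exactly the staleness the percpu
pools end up with.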
Can you please try the following patch and, if it resolves the issue,
report the workqueue part (it's at the end) of the sysrq-t dump?
Thanks.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 984f6ff..276557b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4411,14 +4411,14 @@ void show_workqueue_state(void)
 				break;
 			}
 		}
-		if (idle)
-			continue;
+		//if (idle)
+		//	continue;
 
 		pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags);
 
 		for_each_pwq(pwq, wq) {
 			spin_lock_irqsave(&pwq->pool->lock, flags);
-			if (pwq->nr_active || !list_empty(&pwq->delayed_works))
+			//if (pwq->nr_active || !list_empty(&pwq->delayed_works))
 				show_pwq(pwq);
 			spin_unlock_irqrestore(&pwq->pool->lock, flags);
 		}
@@ -4429,8 +4429,8 @@ void show_workqueue_state(void)
 		bool first = true;
 
 		spin_lock_irqsave(&pool->lock, flags);
-		if (pool->nr_workers == pool->nr_idle)
-			goto next_pool;
+		//if (pool->nr_workers == pool->nr_idle)
+		//	goto next_pool;
 
 		pr_info("pool %d:", pool->id);
 		pr_cont_pool_info(pool);
@@ -4649,10 +4649,12 @@ int workqueue_online_cpu(unsigned int cpu)
 	for_each_pool(pool, pi) {
 		mutex_lock(&pool->attach_mutex);
 
-		if (pool->cpu == cpu)
+		if (pool->cpu == cpu) {
+			pool->node = cpu_to_node(cpu);
 			rebind_workers(pool);
-		else if (pool->cpu < 0)
+		} else if (pool->cpu < 0) {
 			restore_unbound_workers_cpumask(pool, cpu);
+		}
 
 		mutex_unlock(&pool->attach_mutex);
 	}
@@ -5495,8 +5497,6 @@ int __init workqueue_init_early(void)
 
 	pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
 
-	wq_numa_init();
-
 	/* initialize CPU pools */
 	for_each_possible_cpu(cpu) {
 		struct worker_pool *pool;
@@ -5571,6 +5571,11 @@ int __init workqueue_init(void)
 	struct worker_pool *pool;
 	int cpu, bkt;
+	struct workqueue_struct *wq;
 
+	wq_numa_init();
+	list_for_each_entry(wq, &workqueues, list)
+		wq_update_unbound_numa(wq, smp_processor_id(), true);
+
 	/* create the initial workers */
 	for_each_online_cpu(cpu) {
 		for_each_cpu_worker_pool(pool, cpu) {