Message-ID: <20170823142421.GK491396@devbig577.frc2.facebook.com>
Date: Wed, 23 Aug 2017 07:24:22 -0700
From: Tejun Heo <tj@...nel.org>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Michael Bringmann <mwb@...ux.vnet.ibm.com>
Subject: Re: [GIT PULL] workqueue fixes for v4.13-rc3
Hello, Geert.
Something is really fishy.
On Wed, Aug 23, 2017 at 10:10:54AM +0200, Geert Uytterhoeven wrote:
> > + pr_warn_once("WARNING: workqueue empty cpumask: node=%d cpu_going_down=%d cpumask=%*pb online=%*pb possible=%*pb\n",
> > + node, cpu_going_down, cpumask_pr_args(attrs->cpumask),
> > + cpumask_pr_args(cpumask_of_node(node)),
> > + cpumask_pr_args(wq_numa_possible_cpumask[node]));
>
> WARNING: workqueue empty cpumask: node=1 cpu_going_down=-1 cpumask=1
> online=1 possible=0
So, somehow cpu0 seems to be associated with node 1 instead of node 0.
It seems highly unlikely, but does the system actually have multiple
NUMA nodes?
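If it's supposed to be a single-node machine, can you also dump the
raw cpu -> node mapping?  Something like the following (just an
untested sketch) near the top of wq_numa_init() should show which
node each cpu claims before the tables get built:

	int cpu;

	/* untested: dump the boot-time cpu -> node mapping */
	for_each_possible_cpu(cpu)
		pr_info("XXX cpu%d -> node %d (online=%d)\n",
			cpu, cpu_to_node(cpu), cpu_online(cpu));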
> > @@ -5526,6 +5528,9 @@ static void __init wq_numa_init(void)
> >
> > wq_numa_possible_cpumask = tbl;
> > wq_numa_enabled = true;
> > +
> > + for_each_node(node)
> > + printk("XXX wq node[%d] %*pb\n", node, cpumask_pr_args(wq_numa_possible_cpumask[node]));
>
> XXX wq node[0] 1
> XXX wq node[1] 0
> XXX wq node[2] 0
> XXX wq node[3] 0
> XXX wq node[4] 0
> XXX wq node[5] 0
> XXX wq node[6] 0
> XXX wq node[7] 0
No idea why num_possible_cpus() is 8 on a non-SMP system, but the
problem is that, during boot while wq_numa_init() was running, cpu0
reported that it was associated with node 0, but later it reports that
it's associated with node 1.  It looks like the NUMA setup is screwed
up.
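If you want to catch the flip in action, something along these lines
might help (untested sketch; wq_boot_node0 is a made-up variable just
for illustration):

	/* record cpu0's node once during wq_numa_init() */
	static int wq_boot_node0 = NUMA_NO_NODE;

	wq_boot_node0 = cpu_to_node(0);

	/* later, e.g. from wq_calc_node_cpumask() */
	if (cpu_to_node(0) != wq_boot_node0)
		pr_warn_once("XXX cpu0 moved from node %d to node %d\n",
			     wq_boot_node0, cpu_to_node(0));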
Thanks.
--
tejun