Message-ID: <20160816154205.GE9516@htj.duckdns.org>
Date: Tue, 16 Aug 2016 11:42:05 -0400
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Heiko Carstens <heiko.carstens@...ibm.com>,
Ming Lei <tom.leiming@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Lai Jiangshan <laijs@...fujitsu.com>,
Michael Holzheu <holzheu@...ux.vnet.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: Re: [bisected] "sched: Allow per-cpu kernel threads to run on online
&& !active" causes warning

Hello, Peter.

On Tue, Aug 16, 2016 at 05:29:49PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 16, 2016 at 11:20:27AM -0400, Tejun Heo wrote:
> > As long as the mapping doesn't change after the first onlining of the
> > CPU, the workqueue side shouldn't be too difficult to fix up. I'll
> > look into it. For memory allocations, as long as the cpu <-> node
> > mapping is established before any memory allocation for the cpu takes
> > place, it should be fine too, I think.
>
> Don't we allocate per-cpu memory for 'cpu_possible_map' on boot? There's
> a whole bunch of per-cpu memory users that do things like:
>
>
> 	for_each_possible_cpu(cpu) {
> 		struct foo *foo = per_cpu_ptr(&per_cpu_var, cpu);
>
> 		/* muck with foo */
> 	}
>
>
> Which requires a cpu->node map for all possible cpus at boot time.

Ah, right. If the cpu -> node mapping is dynamic, there isn't much we
can do about per-cpu memory getting allocated on the wrong node. It
is also problematic that percpu allocations can race against an
onlining CPU switching its node association.
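
To make the ordering problem concrete, here is a minimal userspace
sketch (not kernel code; cpu_to_node_tbl and alloc_percpu_chunk are
made-up names for illustration) of an allocator that picks the backing
node at allocation time, which is roughly what happens for the
possible-CPU chunks at boot:

#include <stdio.h>

#define NR_POSSIBLE_CPUS	4

/* hypothetical boot-time guess: every possible CPU on node 0 */
static int cpu_to_node_tbl[NR_POSSIBLE_CPUS];

/* stand-in for allocating a CPU's per-cpu chunk on its node */
static void alloc_percpu_chunk(int cpu)
{
	/* the node is consulted *now*, at allocation time */
	printf("cpu%d: chunk allocated on node %d\n",
	       cpu, cpu_to_node_tbl[cpu]);
}

int main(void)
{
	int cpu;

	/* boot: walk all possible CPUs, as for_each_possible_cpu() does */
	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		alloc_percpu_chunk(cpu);

	/*
	 * Much later cpu3 is hot-added and turns out to sit on node 1.
	 * Its chunk was already placed using the stale mapping above,
	 * so updating the table now can't fix the placement.
	 */
	cpu_to_node_tbl[3] = 1;
	printf("cpu3: now on node %d, chunk stays on node 0\n",
	       cpu_to_node_tbl[3]);
	return 0;
}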

One way to keep the mapping stable would be to reserve per-node
possible CPU slots so that the CPU number assigned to a newly arriving
CPU already belongs to the right node. It'd be a simple solution but
would get really expensive as the number of nodes grows.
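
A rough sketch of that slot-reservation idea (again userspace-only;
NR_NODES, SLOTS_PER_NODE and the helpers are purely illustrative) to
show how the mapping becomes a static function of the CPU number and
where the cost comes from:

#include <stdio.h>

#define NR_NODES	8
#define SLOTS_PER_NODE	16	/* worst-case CPUs one node might gain */

/* with reserved slots, the node is a pure function of the CPU number */
static int cpu_to_node(int cpu)
{
	return cpu / SLOTS_PER_NODE;
}

/* hand out the next free CPU number from the target node's range */
static int next_free[NR_NODES];

static int assign_cpu_id(int node)
{
	if (next_free[node] >= SLOTS_PER_NODE)
		return -1;	/* node's reservation exhausted */
	return node * SLOTS_PER_NODE + next_free[node]++;
}

int main(void)
{
	/* a CPU hot-added on node 3 gets an ID from node 3's slot range */
	int cpu = assign_cpu_id(3);

	printf("new cpu%d -> node %d\n", cpu, cpu_to_node(cpu));

	/*
	 * The cost: the possible-CPU space is NR_NODES * SLOTS_PER_NODE
	 * no matter how many CPUs ever show up, and every boot-time
	 * for_each_possible_cpu() allocation scales with it.
	 */
	printf("possible cpus reserved: %d\n", NR_NODES * SLOTS_PER_NODE);
	return 0;
}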

Heiko, do you have any ideas?

Thanks.

--
tejun