Message-Id: <20150708040056.948A1140770@ozlabs.org>
Date: Wed, 8 Jul 2015 14:00:56 +1000 (AEST)
From: Michael Ellerman <mpe@...erman.id.au>
To: Nishanth Aravamudan <nacc@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Paul Mackerras <paulus@...ba.org>,
Anton Blanchard <anton@...ba.org>,
David Rientjes <rientjes@...gle.com>,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [RFC,1/2] powerpc/numa: fix cpu_to_node() usage during boot
On Thu, 2015-02-07 at 23:02:02 UTC, Nishanth Aravamudan wrote:
> Much like on x86, now that powerpc is using USE_PERCPU_NUMA_NODE_ID, we
> have an ordering issue during boot with early calls to cpu_to_node().
"now that .." implies we changed something and broke this. What commit was
it that changed the behaviour?
> The values returned by those calls now depend on the per-CPU areas being
> set up, but that is not guaranteed to be the case during boot. Instead,
> we need to add an early_cpu_to_node() which doesn't use the per-CPU area
> and call that from certain spots that are known to invoke cpu_to_node()
> before the per-CPU areas are configured.
>
> On an example 2-node NUMA system with the following topology:
>
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3
> node 0 size: 2029 MB
> node 0 free: 1753 MB
> node 1 cpus: 4 5 6 7
> node 1 size: 2045 MB
> node 1 free: 1945 MB
> node distances:
> node 0 1
> 0: 10 40
> 1: 40 10
>
> we currently emit at boot:
>
> [ 0.000000] pcpu-alloc: [0] 0 1 2 3 [0] 4 5 6 7
>
> After this commit, we correctly emit:
>
> [ 0.000000] pcpu-alloc: [0] 0 1 2 3 [1] 4 5 6 7
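For reference, a minimal sketch of what an early_cpu_to_node() along the
lines described above could look like on powerpc. It assumes the
firmware-derived numa_cpu_lookup_table[] has already been populated from
the device tree by the time it is called; the actual patch may differ in
naming and fallback behaviour.

/*
 * Sketch only: resolve a CPU's node without touching the per-CPU
 * area, for use before setup_per_cpu_areas() has run.  Assumes
 * powerpc's NUMA lookup table was filled in from the device tree
 * during early boot.
 */
extern int numa_cpu_lookup_table[];	/* arch/powerpc/mm/numa.c */

static inline int early_cpu_to_node(int cpu)
{
	int nid = numa_cpu_lookup_table[cpu];

	/* Fall back to node 0 if firmware gave us nothing for this CPU. */
	return (nid < 0) ? 0 : nid;
}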
So it looks fairly sane, and I guess it's a bug fix.
But I'm a bit reluctant to put it in straight away without some time in next.
It looks like the symptom is that the per-cpu areas are all allocated on node
0. Is that all that goes wrong?
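To make that symptom concrete: the pcpu-alloc lines come from the
first-chunk setup in setup_per_cpu_areas(), where a cpu-distance callback
decides how CPUs are grouped. A rough illustration, assuming a helper like
the early_cpu_to_node() sketched above (the real powerpc callback may
differ in detail):

/*
 * Illustration only: a typical pcpu first-chunk distance callback.
 * If the node lookup still goes through cpu_to_node() this early,
 * every CPU reports node 0, all pairs look LOCAL_DISTANCE apart,
 * and pcpu_build_alloc_info() keeps them in a single group:
 *     pcpu-alloc: [0] 0 1 2 3 [0] 4 5 6 7
 * With a lookup that doesn't depend on the per-CPU area, cross-node
 * pairs report REMOTE_DISTANCE and the CPUs split per node:
 *     pcpu-alloc: [0] 0 1 2 3 [1] 4 5 6 7
 */
static int pcpu_cpu_distance(unsigned int from, unsigned int to)
{
	if (early_cpu_to_node(from) == early_cpu_to_node(to))
		return LOCAL_DISTANCE;
	else
		return REMOTE_DISTANCE;
}

In the single-group case the first chunk ends up node-0-local for every
CPU, which lines up with the symptom described above.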
cheers