Message-ID: <20110302154215.GN3319@htj.dyndns.org>
Date: Wed, 2 Mar 2011 16:42:15 +0100
From: Tejun Heo <tj@...nel.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Yinghai Lu <yinghai@...nel.org>, Ingo Molnar <mingo@...e.hu>,
tglx@...utronix.de, "H. Peter Anvin" <hpa@...or.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH x86/mm UPDATED] x86-64, NUMA: Fix distance table handling

Hey,

On Wed, Mar 02, 2011 at 06:30:59AM -0800, David Rientjes wrote:
> Acked-by: David Rientjes <rientjes@...gle.com>
>
> There's also this in numa_emulation() that isn't a safe assumption:
>
> /* make sure all emulated nodes are mapped to a physical node */
> for (i = 0; i < ARRAY_SIZE(emu_nid_to_phys); i++)
> if (emu_nid_to_phys[i] == NUMA_NO_NODE)
> emu_nid_to_phys[i] = 0;
>
> Node id 0 is not always online depending on how you set up your SRAT. I'm
> not sure why emu_nid_to_phys[] would ever map a fake node id that doesn't
> exist to a physical node id rather than NUMA_NO_NODE, so I think it can
> just be removed. Otherwise, it should be mapped to a physical node id
> that is known to be online.
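
[For illustration only: the "map it to a physical node id that is known
to be online" alternative mentioned above might look roughly like the
sketch below; using numa_nodes_parsed and first_node() here is an
assumption, not code from the patch.]

	/* sketch: fall back to the first parsed physical node instead of 0 */
	for (i = 0; i < ARRAY_SIZE(emu_nid_to_phys); i++)
		if (emu_nid_to_phys[i] == NUMA_NO_NODE)
			emu_nid_to_phys[i] = first_node(numa_nodes_parsed);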

Unless I screwed up, that behavior isn't new. It's just put in a
different form. Looking through the code... Okay, I think node 0
always exists. SRAT PXM isn't used as the node number directly. It
goes through acpi_map_pxm_to_node(), which allocates nids from 0 up.
amdtopology also guarantees the existence of node 0, so I think we're
on the safe side, and that is probably the reason why we had the above
behavior in the first place.
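
[Roughly, the allocation described above works like this simplified
sketch of acpi_map_pxm_to_node(); it is paraphrased from memory, so the
details may differ from the actual drivers/acpi/numa.c code.]

	/* simplified: each new PXM gets the lowest unused nid, so the
	 * first proximity domain parsed always ends up as node 0 */
	int acpi_map_pxm_to_node(int pxm)
	{
		int node = pxm_to_node_map[pxm];

		if (node == NUMA_NO_NODE) {
			node = first_unset_node(nodes_found_map);
			pxm_to_node_map[pxm] = node;
			node_to_pxm_map[node] = pxm;
			node_set(node, nodes_found_map);
		}
		return node;
	}
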
IIRC, there are other places which assume the existence of node 0.
Whether it's a good idea or not, I'm not sure, but requiring node 0 to
always be allocated doesn't sound too wrong to me. Maybe we can add a
BUG_ON() somewhere if node 0 is offline.
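
[Something along these lines, presumably; where exactly the check would
live is a guess.]

	/* sketch: assert that node 0 made it, once NUMA init has settled */
	BUG_ON(!node_online(0));
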
Thanks.
--
tejun