Message-ID: <20140918073224.GC24842@nazgul.tnic>
Date: Thu, 18 Sep 2014 09:32:24 +0200
From: Borislav Petkov <bp@...en8.de>
To: Dave Hansen <dave@...1.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...nel.org,
hpa@...ux.intel.com, ak@...ux.intel.com,
Alex Chiang <achiang@...com>, Borislav Petkov <bp@...e.de>,
Rusty Russell <rusty@...tcorp.com.au>,
Mike Travis <travis@....com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be
"sane"

On Wed, Sep 17, 2014 at 02:55:26PM +0200, Borislav Petkov wrote:
> This sounds misleading to me. If I had to explain how I
> understand physical_package_id, I'd say it is the physical piece of
> silicon containing the core. Which is consistent with what Peter says
> that using it to identify NUMA nodes is wrong.
>
> Btw, I'm trying to get on an AMD MCM box to dump those fields but it is
> kinda hard currently. Will report back once I have something...

Ok, so Brice sent me some AMD MCM data from a 4-socket box.
physical_package_id there really denotes the physical package, i.e. the
silicon, i.e. the physical socket which contains the core.

So on a 4-socket machine with 16 cores per socket, you have:
physical_package_id
cpu0-15 : 0
cpu16-31: 1
cpu32-47: 2
cpu48-63: 3
which all looks nicely regular and clean.

core_siblings mirrors exactly that too, so you don't see the internal
nodes from that either, i.e.:
cpu0/topology/core_siblings_list:0-15
...
cpu16/topology/core_siblings_list:16-31
...
cpu32/topology/core_siblings_list:32-47
...
cpu48/topology/core_siblings_list:48-63
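
For reference, here's a minimal sketch (not from the original mail) of
how one could dump those two sysfs fields per CPU; it assumes the
standard /sys/devices/system/cpu/cpuN/topology/ layout and stops at the
first missing CPU directory:

#include <stdio.h>

/* Read one line from a sysfs file; 0 on success, -1 otherwise. */
static int read_line(const char *path, char *buf, int len)
{
	FILE *f = fopen(path, "r");
	int ret = -1;

	if (!f)
		return ret;
	if (fgets(buf, len, f))
		ret = 0;
	fclose(f);
	return ret;
}

int main(void)
{
	char path[256], val[256];
	int cpu;

	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
			 cpu);
		if (read_line(path, val, sizeof(val)))
			break;	/* no more CPUs */
		/* sysfs values already carry a trailing newline */
		printf("cpu%d physical_package_id: %s", cpu, val);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/core_siblings_list",
			 cpu);
		if (!read_line(path, val, sizeof(val)))
			printf("cpu%d core_siblings_list: %s", cpu, val);
	}
	return 0;
}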
--
Regards/Gruss,
Boris.