Message-ID: <541866F2.4020108@sr71.net>
Date: Tue, 16 Sep 2014 09:36:02 -0700
From: Dave Hansen <dave@...1.net>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
CC: Chuck Ebbert <cebbert.lkml@...il.com>,
linux-kernel@...r.kernel.org, borislav.petkov@....com,
andreas.herrmann3@....com, hpa@...ux.intel.com, ak@...ux.intel.com
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be
"sane"
On 09/16/2014 08:59 AM, Peter Zijlstra wrote:
> On Tue, Sep 16, 2014 at 08:44:03AM +0200, Ingo Molnar wrote:
>> Note that that's not really a 'NUMA node' in the way lots of
>> places in the kernel assume it: permanent placement asymmetry
>> (and access cost asymmetry) of RAM.
>
> Agreed, that is not NUMA, both groups will have the exact same local
> DRAM latency (unlike the AMD thing which has two memory busses on the
> single package, and therefore really has two nodes on a single chip).
I don't think this is correct.
From my testing, each ring of CPUs has a "close" and a "far" memory
controller in the socket.
> This also means the CoD thing sets up the NUMA masks incorrectly.
I used this publicly-available Intel tool:
https://software.intel.com/en-us/articles/intelr-memory-latency-checker
and ran it in various combinations, pinning the latency checker to
different CPUs and NUMA nodes.
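FWIW, the idle-latency measurement that tool does boils down to a
dependent pointer chase through a buffer placed on a chosen node.
Below is a rough, untested sketch of the same idea using libnuma for
the pinning; the buffer size, iteration count and command-line node
arguments are just placeholders, not what mlc actually uses:

/*
 * Hypothetical sketch: pin to one node, allocate on another, and time
 * a dependent pointer chase.  Build with: gcc -O2 chase.c -o chase -lnuma
 * Usage: ./chase <cpu-node> <mem-node>
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINES (64UL * 1024 * 1024 / 64)	/* 64MB worth of cache lines */
#define ITERS (16UL * 1024 * 1024)	/* dependent loads to time */

int main(int argc, char **argv)
{
	int cpu_node = argc > 1 ? atoi(argv[1]) : 0;
	int mem_node = argc > 2 ? atoi(argv[2]) : 0;
	size_t stride = 64 / sizeof(void *);	/* one pointer per cache line */
	struct timespec t0, t1;
	size_t i, *perm;
	void **buf, **p;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}
	numa_run_on_node(cpu_node);			/* pin execution */
	buf = numa_alloc_onnode(LINES * 64, mem_node);	/* place the memory */
	perm = malloc(LINES * sizeof(*perm));
	if (!buf || !perm)
		return 1;

	/* random permutation so the chase defeats the hardware prefetchers */
	for (i = 0; i < LINES; i++)
		perm[i] = i;
	for (i = LINES - 1; i > 0; i--) {
		size_t j = rand() % (i + 1);
		size_t tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
	}
	for (i = 0; i < LINES; i++)
		buf[perm[i] * stride] = &buf[perm[(i + 1) % LINES] * stride];
	free(perm);

	p = buf;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERS; i++)
		p = *p;					/* serialized loads */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("cpu node %d -> mem node %d: %.1f ns/load (%p)\n",
	       cpu_node, mem_node, ns / ITERS, (void *)p);

	numa_free(buf, LINES * 64);
	return 0;
}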
Here's what I think the SLIT table should look like with cluster-on-die
disabled. There is one node per socket and the latency to the other
node is 1.5x the latency to the local node:
 *   0   1
 0  10  15
 1  15  10
or, measured in ns:
 *    0    1
 0   76  119
 1  114   76
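For comparison, the distance table the kernel actually ended up with
can be read back from sysfs (numactl --hardware prints the same
information).  A minimal sketch that dumps each node's row, assuming
the usual /sys/devices/system/node layout and contiguous node
numbering:

/*
 * Dump the per-node distance rows the kernel exposes; these come from
 * the firmware SLIT (or the kernel's defaults when there is no SLIT).
 */
#include <stdio.h>

int main(void)
{
	char path[64], row[256];
	int node;

	for (node = 0; ; node++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		f = fopen(path, "r");
		if (!f)
			break;		/* assumes no gaps in node numbering */
		if (fgets(row, sizeof(row), f))
			printf("node %d: %s", node, row);
		fclose(f);
	}
	return 0;
}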
Enabling cluster-on-die, we get 4 nodes.  The local memory in the same
socket gets faster, and remote memory in the same socket gets both
absolutely and relatively slower:
 *   0   1   2   3
 0  10  20  26  26
 1  20  10  26  26
 2  26  26  10  20
 3  26  26  20  10
and in ns:
 *       0      1      2      3
 0    74.8  152.3  190.6  200.4
 1   146.2   75.6  190.8  200.6
 2   185.1  195.5   74.5  150.1
 3   186.6  195.6  147.3   75.6
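As a rough cross-check of the SLIT values against the measurements,
normalizing node 0's row to its local latency:

  152.3 / 74.8 ~= 2.0  -> distance 20 (other cluster, same socket)
  190.6 / 74.8 ~= 2.5  -> distance 26 (other socket)
  200.4 / 74.8 ~= 2.7  -> distance 26 (other socket)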
So I think it really is reasonable to say that there are 2 NUMA nodes in
a socket.
BTW, these numbers are only approximate.  They were not gathered under
particularly controlled conditions, and I don't even remember which
kernel they were taken under.