Message-ID: <541C72A9.9090509@sr71.net>
Date: Fri, 19 Sep 2014 11:15:05 -0700
From: Dave Hansen <dave@...1.net>
To: Karel Zak <kzak@...hat.com>
CC: linux-kernel@...r.kernel.org, dave.hansen@...ux.intel.com,
a.p.zijlstra@...llo.nl, mingo@...nel.org, hpa@...ux.intel.com,
brice.goglin@...il.com, bp@...en8.de
Subject: Re: [PATCH] x86: new topology for multi-NUMA-node CPUs
On 09/19/2014 04:45 AM, Karel Zak wrote:
> hmm... it would be also nice to test it with lscpu(1) from
> util-linux (but it uses maps rather than lists from cpu*/topology/).
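(For anyone following along: the "maps" are the hex bitmask files like
core_siblings under /sys/devices/system/cpu/cpuN/topology/, and the
"lists" are the human-readable range files like core_siblings_list.
A minimal sketch, mine and not lscpu's actual code, that just dumps
both forms for cpu0:

/*
 * Print the bitmask ("map") and range ("list") forms of the same
 * topology info from sysfs, to show the two formats side by side.
 */
#include <stdio.h>

static void dump(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f || !fgets(buf, sizeof(buf), f))
		printf("%s: <unreadable>\n", path);
	else
		printf("%s: %s", path, buf);	/* buf keeps its '\n' */
	if (f)
		fclose(f);
}

int main(void)
{
	/* bitmask form, e.g. "ff,00000000" */
	dump("/sys/devices/system/cpu/cpu0/topology/core_siblings");
	/* list form, e.g. "0-8,36-44" */
	dump("/sys/devices/system/cpu/cpu0/topology/core_siblings_list");
	return 0;
}
)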
Here's the output with and without Cluster-on-Die enabled.
Everything looks OK to me. The cache size changes are what the CPU
actually tells us through CPUID leaves.
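If you want to double-check the cache numbers yourself, here's a quick
sketch (my own, not from the patch) that recomputes them from CPUID
leaf 4 (deterministic cache parameters) using the SDM formula
ways * partitions * line_size * sets. With COD on, the CPU reports the
halved per-cluster L3, which is where the 23040K below comes from:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx, i;

	for (i = 0; ; i++) {
		__cpuid_count(4, i, eax, ebx, ecx, edx);

		unsigned int type = eax & 0x1f;	/* 0 == no more caches */
		if (!type)
			break;

		unsigned int level = (eax >> 5) & 0x7;
		unsigned int ways  = ((ebx >> 22) & 0x3ff) + 1;
		unsigned int parts = ((ebx >> 12) & 0x3ff) + 1;
		unsigned int line  = (ebx & 0xfff) + 1;
		unsigned int sets  = ecx + 1;

		printf("L%u %s: %uK\n", level,
		       type == 1 ? "data" : type == 2 ? "instr" : "unified",
		       ways * parts * line * sets / 1024);
	}
	return 0;
}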
[root@...-grantley-03 ~]# diff -ru lscpu.nocod lscpu.wcod
--- lscpu.nocod 2014-09-19 04:01:17.846336595 -0700
+++ lscpu.wcod 2014-09-19 04:10:56.557383761 -0700
@@ -6,18 +6,20 @@
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
-NUMA node(s): 2
+NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping: 2
-CPU MHz: 1340.468
-BogoMIPS: 4590.53
+CPU MHz: 1360.234
+BogoMIPS: 4590.67
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
-L3 cache: 46080K
-NUMA node0 CPU(s): 0-17,36-53
-NUMA node1 CPU(s): 18-35,54-71
+L3 cache: 23040K
+NUMA node0 CPU(s): 0-8,36-44
+NUMA node1 CPU(s): 9-17,45-53
+NUMA node2 CPU(s): 18-26,54-62
+NUMA node3 CPU(s): 27-35,63-71