Date: Tue, 16 Sep 2014 02:03:00 -0500
From: Chuck Ebbert <cebbert.lkml@...il.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Dave Hansen <dave@...1.net>,
	linux-kernel@...r.kernel.org, borislav.petkov@....com,
	andreas.herrmann3@....com, hpa@...ux.intel.com, ak@...ux.intel.com
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"

On Tue, 16 Sep 2014 08:44:03 +0200
Ingo Molnar <mingo@...nel.org> wrote:

> 
> * Chuck Ebbert <cebbert.lkml@...il.com> wrote:
> 
> > On Tue, 16 Sep 2014 05:29:20 +0200
> > Peter Zijlstra <peterz@...radead.org> wrote:
> > 
> > > On Mon, Sep 15, 2014 at 03:26:41PM -0700, Dave Hansen wrote:
> > > > 
> > > > I'm getting the spew below when booting with Haswell (Xeon
> > > > E5-2699) CPUs and the "Cluster-on-Die" (CoD) feature
> > > > enabled in the BIOS.
> > > 
> > > What is that cluster-on-die thing? I've heard it before but
> > > never could find anything on it.
> > 
> > Each CPU has a 2.5MB slice of L3, connected together in a ring
> > that makes it all act like a single shared cache. The HW tries
> > to place the data so it's closest to the CPU that uses it. On
> > the larger processors there are two rings with an interconnect
> > between them that adds latency if a cache fetch has to cross
> > that. CoD breaks that connection and effectively gives you two
> > nodes on one die.
> 
> Note that that's not really a 'NUMA node' in the way lots of
> places in the kernel assume it: permanent placement asymmetry
> (and access cost asymmetry) of RAM.
> 
> It's a new topology construct that needs new handling (and
> probably a new mask): Non Uniform Cache Architecture (NUCA)
> or so.

Hmm, looking closer at the diagram, each ring has its own memory
controller, so it really is NUMA if you break the interconnect
between the caches.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
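For reference, the two-nodes-per-socket layout the thread describes can be
observed from userspace by counting how many NUMA nodes each physical
package spans. The following is a minimal sketch, not part of the original
thread: it assumes the standard sysfs layout (the nodeN entry under each
/sys/devices/system/cpu/cpuN directory and the physical_package_id topology
file); the program name and limits are illustrative.

/*
 * cod_check.c - illustrative sketch, not from the patch or thread.
 * Report any physical package that spans more than one NUMA node,
 * i.e. the Cluster-on-Die layout discussed above.  Assumes the
 * usual sysfs topology files are present.
 */
#include <dirent.h>
#include <stdio.h>

#define MAX_PKGS	64
#define MAX_NODES	64

static int read_int(const char *path)
{
	FILE *f = fopen(path, "r");
	int v = -1;

	if (f) {
		if (fscanf(f, "%d", &v) != 1)
			v = -1;
		fclose(f);
	}
	return v;
}

/* Find the NUMA node of a CPU from the "nodeN" entry in its sysfs dir. */
static int cpu_node(const char *cpu_dir)
{
	DIR *d = opendir(cpu_dir);
	struct dirent *de;
	int node = -1;

	if (!d)
		return -1;
	while ((de = readdir(d))) {
		if (sscanf(de->d_name, "node%d", &node) == 1)
			break;
		node = -1;
	}
	closedir(d);
	return node;
}

int main(void)
{
	/* nodes_seen[pkg][node] marks which nodes each package contains */
	static int nodes_seen[MAX_PKGS][MAX_NODES];
	char path[256];
	int cpu, pkg, node;

	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
			 cpu);
		pkg = read_int(path);
		if (pkg < 0)
			break;	/* no more CPUs */

		snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d", cpu);
		node = cpu_node(path);
		if (pkg < MAX_PKGS && node >= 0 && node < MAX_NODES)
			nodes_seen[pkg][node] = 1;
	}

	for (pkg = 0; pkg < MAX_PKGS; pkg++) {
		int count = 0;

		for (node = 0; node < MAX_NODES; node++)
			count += nodes_seen[pkg][node];
		if (count > 1)
			printf("package %d spans %d NUMA nodes (Cluster-on-Die?)\n",
			       pkg, count);
	}
	return 0;
}

Built with a plain "gcc -o cod_check cod_check.c", on a box like the
Haswell system above it should report each package spanning two nodes
when CoD is enabled in the BIOS, and print nothing when it is not.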