Open Source and information security mailing list archives
Date: Tue, 16 Sep 2014 09:46:30 -0700
From: Dave Hansen <dave@...1.net>
To: Peter Zijlstra <peterz@...radead.org>, Chuck Ebbert <cebbert.lkml@...il.com>
CC: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org, borislav.petkov@....com, andreas.herrmann3@....com, hpa@...ux.intel.com, ak@...ux.intel.com
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"

On 09/16/2014 09:01 AM, Peter Zijlstra wrote:
> On Tue, Sep 16, 2014 at 02:03:00AM -0500, Chuck Ebbert wrote:
>> Hmm, looking closer at the diagram, each ring has its own memory
>> controller, so it really is NUMA if you break the interconnect
>> between the caches.
>
> How does it do that? Does it split the DIMM slots in two as well, with
> half for the one node and the other half for the other? Or will both
> 'nodes' share the same local memory?

I think the diagrams in here are accurate in describing the rings:

http://www.enterprisetech.com/2014/09/08/intel-ups-performance-ante-haswell-xeon-chips/

The "nodes" each get their own memory controller and an exclusive set of
DIMMs.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
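[Editor's note: whether firmware actually exposes each ring as a separate NUMA node (with its own memory controller and DIMMs, as described above) is visible from userspace on Linux via /sys/devices/system/node/node*/cpulist, or via `numactl --hardware`, which also reports per-node memory sizes. A minimal sketch follows; the helper names `parse_cpulist` and `node_cpu_map` are ours, not kernel API.]

```python
import glob
import os


def parse_cpulist(cpulist: str) -> list:
    """Expand a kernel cpulist string such as "0-7,16-23" into CPU numbers."""
    cpus = []
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus


def node_cpu_map() -> dict:
    """Map each NUMA node number to the CPUs it contains, read from sysfs.

    Returns an empty dict on non-Linux systems or kernels without NUMA sysfs.
    """
    mapping = {}
    for path in sorted(glob.glob("/sys/devices/system/node/node*/cpulist")):
        node = int(os.path.basename(os.path.dirname(path))[len("node"):])
        with open(path) as f:
            mapping[node] = parse_cpulist(f.read().strip())
    return mapping
```

On a cluster-on-die Haswell part booted with both "nodes" enabled, `node_cpu_map()` would show two nodes per socket, each covering one ring's CPUs.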