Date:	Tue, 18 Sep 2012 07:05:51 +0200
From:	Mike Galbraith <efault@....de>
To:	Linda Walsh <lkml@...nx.org>
Cc:	Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: 2 physical-cpu (like 2x6core) config and NUMA?

On Mon, 2012-09-17 at 11:00 -0700, Linda Walsh wrote: 
> I was wondering: on dual-processor motherboards, Intel uses dedicated
> memory for each CPU -- 6 memory chips in the X5XXX series -- and to
> access memory attached to the other chip, the data has to be transferred
> over the QPI bus.
> 
> So wouldn't it be of benefit if such dual-chip configurations were set
> up as 'NUMA', since there is a higher cost to migrating memory/processes
> between cores on different chips vs. on the same chip?
> 
> I note from 'cpupower -c all frequency-info' that the "odd" CPU cores
> all have to run at the same clock frequency, and the "even" ones all
> have to run together, which I take to mean that the odd-numbered cores
> are on one chip and the even-numbered cores are on the other chip.
> 
> Since the QPI path is limited and appears to be slower than the local
> memory access rate, wouldn't it be appropriate if 2-chip setups were
> configured as 2 NUMA nodes?
> 
> Although -- I have no clue how the memory space is divided between the
> two chips -- i.e. I don't know, if say I have 24G on each, whether they
> alternate 4G in the physical address space or what (that would all be
> handled (or mapped) before the chips come up, so it could be contiguous).
> 
> 
> Does the kernel support scheduling based on the different speed of
> memory access "on die" vs. "off die"?  I was surprised to see that it
> viewed my system as 1 NUMA node with all 12 cores on 1 node -- when I
> know that it is physically organized as 2x6.

Yeah, the scheduler will set up for NUMA if the SRAT says the box is NUMA.
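
A rough way to check both things from userspace -- how the logical CPU
numbering maps onto the physical packages, and how many NUMA nodes the
kernel actually created -- is to read the standard sysfs files; a quick
sketch, assuming the usual /sys/devices/system layout:

#!/usr/bin/env python
# Map each logical CPU to its physical package (socket) via sysfs, then
# count the NUMA nodes the kernel exposes.
import glob, os, re

packages = {}
for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
    cpu = int(re.search(r"cpu(\d+)$", cpu_dir).group(1))
    pkg_file = os.path.join(cpu_dir, "topology", "physical_package_id")
    try:
        with open(pkg_file) as f:
            pkg = int(f.read())
    except IOError:
        continue  # offline CPU or no topology info exported
    packages.setdefault(pkg, []).append(cpu)

for pkg in sorted(packages):
    print("package %d: cpus %s" % (pkg, sorted(packages[pkg])))

nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print("NUMA nodes visible to the kernel: %d" % len(nodes))

If both packages show up with the odd/even split you describe but the
node count is 1, the firmware is presenting the box as a single node (or
some BIOS option is hiding the second one).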

I have a 64-core DL980 box that numactl --hardware says is a single
node, but that's because RAM truly _exists_ only on one node.  Not a
wonderful (or even supported) setup.
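
The same information numactl digs out is visible directly in sysfs; a
minimal sketch, assuming the usual /sys/devices/system/node layout:

#!/usr/bin/env python
# Print the CPUs and memory the kernel attributes to each NUMA node,
# roughly what "numactl --hardware" reports.
import glob, os

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpus = f.read().strip()
    mem = "unknown"
    with open(os.path.join(node_dir, "meminfo")) as f:
        for line in f:
            if "MemTotal" in line:
                mem = " ".join(line.split()[-2:])  # value plus "kB" unit
                break
    print("%s: cpus %s, MemTotal %s" % (node, cpus, mem))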

If RAM isn't physically plugged into the right slots, or some BIOS
option makes the box appear to be a single node, that's what you'll see
too: SIBLING (maybe), MC and CPU scheduler domains, but no NUMA domain.
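
If you want to see which domains the scheduler actually built, it exports
the domain names per CPU; a rough sketch that tries the two locations I
know of, since where they live depends on kernel version and
CONFIG_SCHED_DEBUG:

#!/usr/bin/env python
# List the scheduler domain names built for cpu0.  The path varies with
# kernel version/config, so try both known locations.
import glob

patterns = [
    "/proc/sys/kernel/sched_domain/cpu0/domain*/name",
    "/sys/kernel/debug/sched/domains/cpu0/domain*/name",
]
names = []
for pattern in patterns:
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            names.append(f.read().strip())
    if names:
        break

if names:
    print("cpu0 sched domains: %s" % ", ".join(names))
else:
    print("no sched domain info exported (CONFIG_SCHED_DEBUG not set?)")

Once the firmware describes both nodes, you'd expect a NUMA level to show
up above the MC and CPU domains.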

-Mike

