Date:	Wed, 13 Jul 2016 07:06:18 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	"H. Peter Anvin" <hpa@...or.com>, tglx@...utronix.de,
	mingo@...e.hu, ak@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: Odd performance results

On Wed, Jul 13, 2016 at 09:18:17AM +0200, Ingo Molnar wrote:
> 
> * Peter Zijlstra <peterz@...radead.org> wrote:
> 
> > On Tue, Jul 12, 2016 at 10:49:58AM -0700, H. Peter Anvin wrote:
> > > On 07/12/16 08:05, Paul E. McKenney wrote:
> > > The CPU in question (and /proc/cpuinfo should show this) has four cores
> > > with a total of eight threads.  The "siblings" and "cpu cores" fields in
> > > /proc/cpuinfo should show the same thing.  So I am utterly confused
> > > about what is unexpected here?
> > 
> > Typically threads are enumerated differently on Intel parts. Namely:
> > 
> >	cpu_id = core_id + nr_cores * smt_id
> 
> Yeah, they are 'interleaved' at the thread/core level - I suppose to 'mix' them for 
> OS schedulers that don't know about SMT.
> 
> (Fortunately this interleaving is not done across NUMA domains.)
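
A minimal sketch of that interleaved numbering, assuming a hypothetical
four-core, two-thread part (the counts are made up purely for illustration):

	#include <stdio.h>

	/* Intel-style interleaved numbering: the second thread of each
	 * core gets a CPU id offset by the total core count. */
	int main(void)
	{
		int nr_cores = 4, nr_smt = 2;
		int core_id, smt_id;

		for (core_id = 0; core_id < nr_cores; core_id++)
			for (smt_id = 0; smt_id < nr_smt; smt_id++)
				printf("core %d thread %d -> cpu %d\n",
				       core_id, smt_id,
				       core_id + nr_cores * smt_id);
		return 0;
	}

With those counts the thread siblings come out as {0,4}, {1,5}, {2,6}, {3,7}.
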
> 
> > $ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
> 
> Btw., this command will print out the mappings in order even on larger systems and 
> shows the CPU # as well:
> 
>  $ grep -i . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -t u -k +3 -n
> 
> /sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,60
> /sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1,61
> /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:2,62
> /sys/devices/system/cpu/cpu3/topology/thread_siblings_list:3,63
> /sys/devices/system/cpu/cpu4/topology/thread_siblings_list:4,64
> /sys/devices/system/cpu/cpu5/topology/thread_siblings_list:5,65
> /sys/devices/system/cpu/cpu6/topology/thread_siblings_list:6,66
> /sys/devices/system/cpu/cpu7/topology/thread_siblings_list:7,67
> /sys/devices/system/cpu/cpu8/topology/thread_siblings_list:8,68
> /sys/devices/system/cpu/cpu9/topology/thread_siblings_list:9,69
> /sys/devices/system/cpu/cpu10/topology/thread_siblings_list:10,70
> /sys/devices/system/cpu/cpu11/topology/thread_siblings_list:11,71
> ...
> /sys/devices/system/cpu/cpu116/topology/thread_siblings_list:56,116
> /sys/devices/system/cpu/cpu117/topology/thread_siblings_list:57,117
> /sys/devices/system/cpu/cpu118/topology/thread_siblings_list:58,118
> /sys/devices/system/cpu/cpu119/topology/thread_siblings_list:59,119

Here is what that gets me on the x86 test system I usually use:

/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,32
/sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1,33
/sys/devices/system/cpu/cpu2/topology/thread_siblings_list:2,34
/sys/devices/system/cpu/cpu3/topology/thread_siblings_list:3,35
/sys/devices/system/cpu/cpu4/topology/thread_siblings_list:4,36
/sys/devices/system/cpu/cpu5/topology/thread_siblings_list:5,37
/sys/devices/system/cpu/cpu6/topology/thread_siblings_list:6,38
/sys/devices/system/cpu/cpu7/topology/thread_siblings_list:7,39
/sys/devices/system/cpu/cpu8/topology/thread_siblings_list:8,40
/sys/devices/system/cpu/cpu9/topology/thread_siblings_list:9,41
/sys/devices/system/cpu/cpu10/topology/thread_siblings_list:10,42
/sys/devices/system/cpu/cpu11/topology/thread_siblings_list:11,43

[ . . . ]

/sys/devices/system/cpu/cpu56/topology/thread_siblings_list:24,56
/sys/devices/system/cpu/cpu57/topology/thread_siblings_list:25,57
/sys/devices/system/cpu/cpu58/topology/thread_siblings_list:26,58
/sys/devices/system/cpu/cpu59/topology/thread_siblings_list:27,59
/sys/devices/system/cpu/cpu60/topology/thread_siblings_list:28,60
/sys/devices/system/cpu/cpu61/topology/thread_siblings_list:29,61
/sys/devices/system/cpu/cpu62/topology/thread_siblings_list:30,62
/sys/devices/system/cpu/cpu63/topology/thread_siblings_list:31,63

On my laptop:

/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0-1
/sys/devices/system/cpu/cpu1/topology/thread_siblings_list:0-1
/sys/devices/system/cpu/cpu2/topology/thread_siblings_list:2-3
/sys/devices/system/cpu/cpu3/topology/thread_siblings_list:2-3
/sys/devices/system/cpu/cpu4/topology/thread_siblings_list:4-5
/sys/devices/system/cpu/cpu5/topology/thread_siblings_list:4-5
/sys/devices/system/cpu/cpu6/topology/thread_siblings_list:6-7
/sys/devices/system/cpu/cpu7/topology/thread_siblings_list:6-7

> > The ordering Paul has, namely 0,1 for core0,smt{0,1}, is not something
> > I've ever seen on an Intel part. AMD OTOH does enumerate their CMT stuff
> > like what Paul has.
> 
> That's more the natural 'direct' mapping from CPU internal topology to CPU id: 
> what's close to each other physically is close to each other in the CPU id space 
> as well.

Agreed!
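
For contrast, the 'direct' mapping would give something like the following,
again for the same hypothetical four-core, two-thread part as in the sketch
above (illustration only, not any particular machine):

	#include <stdio.h>

	/* 'Direct' numbering: both threads of a core get adjacent CPU
	 * ids, matching the pattern in the laptop output above. */
	int main(void)
	{
		int nr_cores = 4, nr_smt = 2;
		int core_id, smt_id;

		for (core_id = 0; core_id < nr_cores; core_id++)
			for (smt_id = 0; smt_id < nr_smt; smt_id++)
				printf("core %d thread %d -> cpu %d\n",
				       core_id, smt_id,
				       core_id * nr_smt + smt_id);
		return 0;
	}

With those counts the thread siblings are {0,1}, {2,3}, {4,5}, {6,7}.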

							Thanx, Paul
