Date:	Thu, 3 Oct 2013 12:05:14 +0200
From:	Stephan von Krawczynski <skraw@...net.com>
To:	linux-kernel <linux-kernel@...r.kernel.org>
Subject: NUMA processor numbering

Hello all,

I have a box with this kind of processor (0-31) and 128 GB RAM:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
stepping        : 7
microcode       : 0x70d
cpu MHz         : 2486.000
cache size      : 20480 KB
physical id     : 0
siblings        : 16
core id         : 0
cpu cores       : 8
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt
tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid
bogomips        : 4400.12
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:


Now, numactl --hardware shows this:

available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
node 0 size: 64581 MB
node 0 free: 12676 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31
node 1 size: 64637 MB
node 1 free: 10660 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

Physically these are two processors, each with 8 cores plus 8 HT siblings
(16 threads per socket).
Does the above output mean that the logical CPUs are numbered alternately
across the two physical CPUs? Does this mean one has to pin processes to
0,2,4,... to stay at "short distance" to node 0 RAM?
If so, it would be a lot better to have them numbered 0-15 and 16-31 for
pinning. Is there a way to achieve this?
Please cc me when answering.
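
For illustration only, here is a minimal sched_setaffinity(2) sketch of the
kind of pinning I mean, with the node 0 CPU list (0,2,...,30) hard-coded
straight from the numactl output above; it is just a sketch, not a real
tool, and the CPU list is an assumption about this particular box:

/* Pin the calling process to the CPUs numactl reports for node 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;
	int cpu;

	CPU_ZERO(&set);
	for (cpu = 0; cpu <= 30; cpu += 2)	/* node 0 CPUs on this box */
		CPU_SET(cpu, &set);

	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return EXIT_FAILURE;
	}

	printf("pinned to node 0 CPUs 0,2,...,30\n");
	return EXIT_SUCCESS;
}

The same effect should be reachable from the shell with
"numactl --cpunodebind=0 <cmd>" or "taskset -c 0,2,4,...,30 <cmd>", but
having the CPUs numbered 0-15 per socket would make this far less
error-prone.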

-- 
Regards,
Stephan
