Date:	Wed, 27 Oct 2010 17:21:08 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Brian Gerst <brgerst@...il.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	mingo@...e.hu
Subject: Re: [PATCH] x86-32: Allocate irq stacks seperate from percpu area

On Wednesday 27 October 2010 at 16:43 +0200, Tejun Heo wrote:

> Ah, okay, that explains it.  So, your NUMA table is screwed up.  It
> would be interesting to dig into where the difference between 32-bit
> and 64-bit comes from.  Maybe it's coming from differences in our init
> code rather than from the BIOS?
> 

I wish that explained it.
I upgraded the BIOS to the latest one from HP; no change.

If I remove HOTPLUG support, I still get:


cpu=0 node=1
cpu=1 node=0
cpu=2 node=1
cpu=3 node=0
cpu=4 node=1
cpu=5 node=0
cpu=6 node=1
cpu=7 node=0
cpu=8 node=1
cpu=9 node=0
cpu=10 node=1
cpu=11 node=0
cpu=12 node=1
cpu=13 node=0
cpu=14 node=1
cpu=15 node=0
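
For reference, the cpu=/node= lines above look like the output of an
ad-hoc debug loop; a minimal sketch of such a loop (assuming an
early_initcall in a test build, not necessarily the exact patch used
here) would be:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/topology.h>

/* Dump the CPU -> node mapping the kernel ended up with. */
static int __init dump_cpu_node_map(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		pr_info("cpu=%d node=%d\n", cpu, cpu_to_node(cpu));

	return 0;
}
early_initcall(dump_cpu_node_map);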

[    0.000000] SMP: Allowing 16 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 64
[    0.000000] Allocating PCI resources starting at e4000000 (gap: e4000000:1ac00000)
[    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:16 nr_node_ids:8
[    0.000000] PERCPU: Embedded 16 pages/cpu @f4600000 s42752 r0 d22784 u131072
[    0.000000] pcpu-alloc: s42752 r0 d22784 u131072 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 
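
The "pcpu-alloc: [0] 00 01 ... 15" line shows all 16 CPUs placed in a
single allocation group, i.e. the embedded first chunk was carved out
as one group rather than one group per node. A rough way to check which
node each CPU's per-cpu area actually landed on (a throwaway debug
sketch; names like pcpu_probe are made up for illustration) would be:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/mm.h>
#include <linux/topology.h>

static DEFINE_PER_CPU(int, pcpu_probe);

/* Report the node backing each CPU's per-cpu area vs. cpu_to_node(). */
static int __init dump_percpu_nodes(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		void *addr = per_cpu_ptr(&pcpu_probe, cpu);

		pr_info("cpu=%d percpu node=%d cpu_to_node=%d\n",
			cpu, page_to_nid(virt_to_page(addr)),
			cpu_to_node(cpu));
	}

	return 0;
}
late_initcall(dump_percpu_nodes);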


# cat /proc/buddyinfo 
Node 0, zone      DMA      0      1      1      1      2      1      1      0      1      1      3 
Node 0, zone   Normal    362    205     46     13      5      2      2      3      3      3    186 
Node 0, zone  HighMem    182    132    102     70     30      2      1      1      1      1    275 
Node 1, zone  HighMem    140     86    107     41     13      3      4      3      2      2    489 
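
For reference, each column in /proc/buddyinfo is the number of free
blocks of the corresponding order (0 through 10 here), so a row can be
turned into a free page count with something like the sketch below (a
hypothetical helper, not code from this thread):

/* Convert one /proc/buddyinfo row (per-order free block counts) into a
 * total free page count; an order-N block covers 2^N pages. */
static unsigned long buddyinfo_row_to_pages(const unsigned long *counts,
					    int nr_orders)
{
	unsigned long pages = 0;
	int order;

	for (order = 0; order < nr_orders; order++)
		pages += counts[order] << order;

	return pages;
}

For the Node 1 HighMem row above this comes to roughly 504,000 4 KiB
pages, i.e. about 1.9 GiB free in that zone.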



