Message-ID: <4CC846D5.50106@kernel.org>
Date:	Wed, 27 Oct 2010 17:35:49 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Brian Gerst <brgerst@...il.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	mingo@...e.hu
Subject: Re: [PATCH] x86-32: Allocate irq stacks seperate from percpu area

On 10/27/2010 05:21 PM, Eric Dumazet wrote:
> I wish it could explain it.
> I upgraded BIOS to latest one from HP. no change.
> 
> If I remove HOTPLUG support I still get :
> 
> cpu=0 node=1
> cpu=1 node=0
> cpu=2 node=1
> cpu=3 node=0
> cpu=4 node=1
> cpu=5 node=0
> cpu=6 node=1
> cpu=7 node=0
> cpu=8 node=1
> cpu=9 node=0
> cpu=10 node=1
> cpu=11 node=0
> cpu=12 node=1
> cpu=13 node=0
> cpu=14 node=1
> cpu=15 node=0
> 
> [    0.000000] SMP: Allowing 16 CPUs, 0 hotplug CPUs
> [    0.000000] nr_irqs_gsi: 64
> [    0.000000] Allocating PCI resources starting at e4000000 (gap: e4000000:1ac00000)
> [    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:16 nr_node_ids:8
> [    0.000000] PERCPU: Embedded 16 pages/cpu @f4600000 s42752 r0 d22784 u131072
> [    0.000000] pcpu-alloc: s42752 r0 d22784 u131072 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15

Hmmm, okay.  Can you please print out early_cpu_to_node() output for
each cpu from arch/x86/kernel/setup_percpu.c::setup_per_cpu_areas()?
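
Something like this (untested sketch; setup_per_cpu_areas() should
already have an "unsigned int cpu" around, otherwise add one) dropped
early in that function would do:

	/* debug: dump the early cpu -> node mapping used for grouping */
	for_each_possible_cpu(cpu)
		printk(KERN_INFO "cpu=%u early_node=%d\n",
		       cpu, early_cpu_to_node(cpu));
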
BTW, some clarifications.

* In the pcpu-alloc debug message, the n of [n] might not necessarily
  match the NUMA node.

* I was confused before.  If the distance between two cpus (on x86,
  derived from early_cpu_to_node() via pcpu_cpu_distance()) is greater
  than LOCAL_DISTANCE (ie. a NUMA configuration), those cpus will
  always end up in different [n] groups.  What gets adjusted is the
  size of each unit.  A rough sketch of the grouping logic is below
  the list.

* No matter what, the end result here is correct.  As there's no low
  memory on node 1, it doesn't matter how the groups are organized in
  the first chunk as long as embedding is used.  And for other chunks,
  pages for each cpu are allocated separately w/ cpu_to_node() anyway,
  so NUMA affinity will be correct, again, regardless of the group
  organization.
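
For reference, the group assignment in pcpu_build_alloc_info() boils
down to something like this (paraphrased from memory and simplified,
so don't trust the details; cpu_distance_fn on x86 is
pcpu_cpu_distance(), which just compares early_cpu_to_node() of the
two cpus):

	/* put each cpu into the first group whose members are all local */
	for_each_possible_cpu(cpu) {
		group = 0;
	next_group:
		for_each_possible_cpu(tcpu) {
			if (cpu == tcpu)
				break;
			if (group_map[tcpu] == group && cpu_distance_fn &&
			    cpu_distance_fn(cpu, tcpu) > LOCAL_DISTANCE) {
				/* too far from someone already in this
				 * group, try the next one */
				group++;
				goto next_group;
			}
		}
		group_map[cpu] = group;
	}

IOW, cpus whose nodes differ can never share a group; only the size of
each per-cpu unit gets tuned afterwards.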

Thanks.

-- 
tejun
