Date:	Tue, 30 Jun 2009 13:21:53 -0700
From:	Scott Lurndal <scott.lurndal@...afsystems.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>, tj@...nel.org,
	linux-kernel@...r.kernel.org, x86@...nel.org,
	linux-arch@...r.kernel.org, andi@...stfloor.org, hpa@...or.com,
	tglx@...utronix.de
Subject: Re: [PATCHSET] percpu: generalize first chunk allocators and improve lpage NUMA support

On Tue, Jun 30, 2009 at 03:39:52PM -0400, Christoph Lameter wrote:
> On Tue, 30 Jun 2009, Ingo Molnar wrote:
> 
> > Yeah, it's a bug for something like a virtual environment which
> > boots generic kernels that might have 64 possible CPUs (on a true
> > 64-way system), but which will have fewer in practice.
> 
> A machine (and a virtual environment) can indicate via the BIOS tables or
> ACPI that there are fewer "possible" cpus. That is actually very common.
> 
> The difference between actual and possible cpus only has to be the number
> of processors that could be brought up later. In a regular system that is
> pretty much zero. In a fancy system with actual hotpluggable cpus there
> would be a difference but usually the number of hotpluggable cpus is
> minimal.

A hypervisor running on a 4-socket Beckton platform will have 32
(without hyperthreading) or 64 (with hyperthreading) CPUs to
allocate in 1Q2010.   A hypervisor supporting ACPI hot-plug would then
be able to hot-plug anywhere from one to 64 cores into a single Linux
guest if it wanted to (and didn't overcommit, so gang scheduling
wasn't required).

Such a hypervisor would typically pass the maximum number of CPUs
in via the guest ACPI tables when the guest boots, with only a subset
marked as initially present, to retain the maximum flexibility to
grow and shrink the guest.
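
As an aside, here is a minimal sketch (using the generic cpumask
helpers; the helper name dump_cpu_counts() is made up for illustration)
of how guest code sees the gap between what the ACPI tables call
"possible" and what is actually present or online:

	#include <linux/cpumask.h>
	#include <linux/printk.h>

	/*
	 * Hypothetical helper: how many CPUs the firmware said could
	 * ever exist, vs. how many are present / online right now.
	 */
	static void dump_cpu_counts(void)
	{
		pr_info("cpus: possible=%u present=%u online=%u\n",
			num_possible_cpus(), num_present_cpus(),
			num_online_cpus());
	}

In a guest booted that way, "possible" stays pinned at the hypervisor's
maximum while "present" and "online" track what has actually been
plugged in so far.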

I've recently been working with a 128-core (Shanghai) / 192-core
(Istanbul) shared-memory system which supports hot-plugging CPUs into
Linux, and that feature would be less useful if a hot-plug event could
fail because the per-cpu area couldn't be allocated at the time of the
event (which argues for pre-allocation, particularly if a large
physically contiguous region is required).
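
To make the pre-allocation point concrete, a minimal sketch (the struct
and function names here are invented) using the generic percpu
allocator, which reserves an instance for every possible CPU up front
so nothing has to be allocated when a CPU is actually hot-added:

	#include <linux/percpu.h>
	#include <linux/errno.h>
	#include <linux/init.h>

	struct my_stats {		/* hypothetical per-cpu payload */
		unsigned long events;
	};

	static struct my_stats __percpu *stats;

	static int __init my_stats_init(void)
	{
		/*
		 * Storage for all possible CPUs is reserved here, where
		 * failure is still easy to handle; a later hot-add finds
		 * its instance already waiting.
		 */
		stats = alloc_percpu(struct my_stats);
		if (!stats)
			return -ENOMEM;
		return 0;
	}

After that, per_cpu_ptr(stats, cpu) is valid for any possible cpu,
including ones that only come online later.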

That said, I don't see much use for 32-bit kernels in environments with
such CPU counts.

scott
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
