Message-ID: <20090630213146.GA17492@elte.hu>
Date:	Tue, 30 Jun 2009 23:31:46 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, tj@...nel.org,
	linux-kernel@...r.kernel.org, x86@...nel.org,
	linux-arch@...r.kernel.org, andi@...stfloor.org, hpa@...or.com,
	tglx@...utronix.de
Subject: Re: [PATCHSET] percpu: generalize first chunk allocators and
	improve lpage NUMA support


* Christoph Lameter <cl@...ux-foundation.org> wrote:

> On Tue, 30 Jun 2009, Ingo Molnar wrote:
> 
> > Yeah, it's a bug for something like a virtual environment which 
> > boots generic kernels that might have 64 possible CPUs (on a 
> > true 64-way system), but which will have fewer in practice.

I think this bit should be quoted too, because it is the crux of the 
issue:

> > It's pretty basic stuff: the on-demand allocation of percpu 
> > resources.

> A machine (and a virtual environment) can indicate via the BIOS 
> tables or ACPI that there are less "possible" cpus. That is 
> actually very common.
> 
> The difference between actual and possible cpus only has to be the 
> number of processors that could be brought up later. In a regular 
> system that is pretty much zero. In a fancy system with actual 
> hotpluggable cpus there would be a difference but usually the 
> number of hotpluggable cpus is minimal.
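
(For reference, the "possible"/"present"/"online" counts in question 
come straight from the kernel's cpumask API. A minimal, made-up 
sketch, with my_dump_cpu_counts() being a hypothetical helper, not 
anything from the patchset:)

#include <linux/cpumask.h>
#include <linux/printk.h>

/* Hypothetical helper: dump the firmware/ACPI-derived CPU counts. */
static void my_dump_cpu_counts(void)
{
	int cpu;

	pr_info("possible=%u present=%u online=%u\n",
		num_possible_cpus(), num_present_cpus(),
		num_online_cpus());

	/* "possible" is the set static percpu areas are sized for. */
	for_each_possible_cpu(cpu)
		pr_debug("cpu%d is possible\n", cpu);
}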

You are arguing against the concept of demand-allocation of 
resources, and I don't think that technical argument can be won.

Sure, you don't have to demand-allocate if you know the demand 
beforehand and can preallocate and size accordingly.
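
(What "preallocate and size accordingly" typically looks like, as a 
minimal sketch: the stock percpu allocator reserves one copy of the 
object for every possible CPU at allocation time. struct my_stats, 
my_setup() and my_account() are made-up names, not from the 
patchset:)

#include <linux/percpu.h>
#include <linux/errno.h>

struct my_stats {
	unsigned long packets;
	unsigned long bytes;
};

static struct my_stats *my_stats;

static int my_setup(void)
{
	/* One instance per *possible* CPU is reserved right here. */
	my_stats = alloc_percpu(struct my_stats);
	if (!my_stats)
		return -ENOMEM;
	return 0;
}

static void my_account(unsigned int len)
{
	/* get_cpu_ptr() returns this CPU's copy, preemption disabled. */
	struct my_stats *s = get_cpu_ptr(my_stats);

	s->packets++;
	s->bytes += len;
	put_cpu_ptr(my_stats);
}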

But what if not? What if the kernel can run on up to 4096 CPUs and 
runs on a big box? Why should a virtual machine face the illogical 
choice between wasting a lot of RAM on preallocation and limiting 
its own extensibility?
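
(And this is what demand-allocation looks like in practice, as a 
minimal sketch using the CPU-hotplug notifier interface as it 
existed around this time; the names my_cpu_data, my_cpu_callback and 
my_hotplug_init are made up, not from the patchset. The pointer 
array is tiny; the real per-CPU object is only allocated when a CPU 
is actually brought up, and freed again when it goes away:)

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/notifier.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/init.h>

struct my_cpu_data {
	unsigned long events;
};

/* Only slots for CPUs that actually came online ever get populated. */
static struct my_cpu_data *my_cpu_data[NR_CPUS];

static int my_cpu_callback(struct notifier_block *nb,
			   unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action) {
	case CPU_UP_PREPARE:
		/* Allocate only when this CPU is really coming up. */
		my_cpu_data[cpu] = kzalloc(sizeof(*my_cpu_data[cpu]),
					   GFP_KERNEL);
		if (!my_cpu_data[cpu])
			return notifier_from_errno(-ENOMEM);
		break;
	case CPU_UP_CANCELED:
	case CPU_DEAD:
		kfree(my_cpu_data[cpu]);
		my_cpu_data[cpu] = NULL;
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block my_cpu_nb = {
	.notifier_call = my_cpu_callback,
};

static int __init my_hotplug_init(void)
{
	register_cpu_notifier(&my_cpu_nb);
	return 0;
}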

In other words: your proposed change in essence reduces the utility 
of CPU hotplug. It's a bad idea.

	Ingo
