Date:	Mon, 14 Jan 2008 11:11:33 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Andi Kleen <ak@...e.de>
Cc:	travis@....com, Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <clameter@....com>,
	Jack Steiner <steiner@....com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/10] x86: Reduce memory and intra-node effects with
	large count NR_CPUs


* Andi Kleen <ak@...e.de> wrote:

> > i.e. we've got ~22K bloat per CPU - which is not bad, but because 
> > it's a static component, it hurts smaller boxes. For distributors to 
> > enable CONFIG_NR_CPUS=1024 by default i guess that bloat has to drop 
> > below 1-2K per CPU :-/ [that would still mean 1-2MB total bloat but 
> > that's much more acceptable than 23MB]
> 
> Even 1-2MB overhead would be too much for distributors I think. 
> Ideally there must be near zero overhead for possible CPUs (and I see 
> no reason in principle why this is not possible). Worst case a low few 
> hundred KBs, but even that would be too much.

i think this patchset already gives a net win, by moving stuff from 
NR_CPUS arrays into the per_cpu area. (Travis, please confirm that this 
is indeed what the numbers show.)

The (total-)size of the per-cpu area(s) grows linearly with the number 
of CPUs, so we'll have the expected near-zero overhead on 4-8-16-32 CPUs 
and the expected larger total overhead on 1024 CPUs.

	Ingo