Date:	Thu, 1 Mar 2007 21:11:58 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	Balbir Singh <balbir@...ibm.com>, Mel Gorman <mel@...net.ie>,
	npiggin@...e.de, clameter@...r.sgi.com, mingo@...e.hu,
	jschopp@...tin.ibm.com, arjan@...radead.org, mbligh@...igh.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: The performance and behaviour of the anti-fragmentation related
 patches



On Thu, 1 Mar 2007, Andrew Morton wrote:
>
> On Thu, 1 Mar 2007 19:44:27 -0800 (PST) Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> 
> > In other words, I really don't see a huge upside. I see *lots* of 
> > downsides, but upsides? Not so much. Almost everybody who wants unplug 
> > wants virtualization, and right now none of the "big virtualization" 
> > people would want to have kernel-level anti-fragmentation anyway since 
> > they'd need to do it on their own.
> 
> Agree with all that, but you're missing the other application: power
> saving.  FBDIMMs take eight watts a pop.

This is a hardware problem. Let's see how long it takes for Intel to 
realize that FBDIMMs were a hugely bad idea from a power perspective.

Yes, the same issues exist for other DRAM forms too, but to a *much* 
smaller degree.

Also, IN PRACTICE you're never ever going to see this anyway. Almost 
everybody wants bank interleaving, because it's a huge performance win on 
many loads. That, in turn, means that your memory will be spread out over 
multiple DIMMs even for a single page, much less any bigger area.
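
To make that concrete, here's a toy sketch (illustrative only; the 
2-channel, 64-byte-line round-robin mapping is an assumed simplification, 
real controllers hash more cleverly than this) of what cache-line 
interleaving does to a single 4 KiB page:

/*
 * Toy model of cache-line channel interleaving.  Assumes 64-byte
 * lines striped round-robin across 2 channels (one DIMM each) --
 * both numbers are example assumptions.  Real controllers use
 * fancier hash functions, but the effect is the same: every page
 * ends up touching every DIMM.
 */
#include <stdio.h>

#define CACHE_LINE	64UL
#define NR_CHANNELS	2UL	/* assumption for the example */

static unsigned long line_to_channel(unsigned long phys)
{
	return (phys / CACHE_LINE) % NR_CHANNELS;
}

int main(void)
{
	unsigned long page = 0x100000;	/* arbitrary 4 KiB page */
	unsigned long off;

	for (off = 0; off < 4096; off += CACHE_LINE)
		printf("line %#lx -> channel %lu\n",
		       page + off, line_to_channel(page + off));
	return 0;
}

Every channel - and so every DIMM behind it - gets hit every 128 bytes of 
that page, so there is never an idle DIMM you could power down while the 
page is in use.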

In other words - forget about DRAM power savings. It's not realistic. And 
if you want low-power, don't use FBDIMMs. It really *is* that simple.
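
(For scale, a back-of-the-envelope - the 16-DIMM box below is a made-up 
example configuration, not a number from Andrew's mail: at 8 W per FBDIMM 
buffer chip, 16 DIMMs x 8 W = 128 W going into the buffers alone, before 
a single watt is spent refreshing or driving the DRAM itself. That's the 
scale of the problem, and no amount of page-shuffling software touches 
it.)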

(And yes, maybe FBDIMM controllers in a few years won't use 8 W per 
buffer. I kind of doubt that, since FBDIMM is fairly fundamentally built 
around high voltage swings at high frequencies.)

Also, on a *truly* idle system, we'll see the power savings whatever we 
do, because the working set will fit in the D$ (data cache), and to get 
those DRAM power savings in reality you need the DRAM controller to shut 
down on its own anyway (ie software would only help a bit).

The whole DRAM power story is a bedtime story for gullible children. Don't 
fall for it. It's not realistic. The hardware support for it DOES NOT 
EXIST today, and probably won't for several years. And the real fix is 
elsewhere anyway (ie people will have to do an FBDIMM-2 interface, which 
is against the whole point of FBDIMM in the first place, but that's what 
you get when you ignore power in the first version!).

		Linus
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
