Message-ID: <Pine.LNX.4.64.0703020903190.3953@woody.linux-foundation.org>
Date:	Fri, 2 Mar 2007 09:16:17 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Mark Gross <mgross@...ux.intel.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Balbir Singh <balbir@...ibm.com>, Mel Gorman <mel@...net.ie>,
	npiggin@...e.de, clameter@...r.sgi.com, mingo@...e.hu,
	jschopp@...tin.ibm.com, arjan@...radead.org, mbligh@...igh.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: The performance and behaviour of the anti-fragmentation related
 patches



On Fri, 2 Mar 2007, Mark Gross wrote:
> > 
> > Yes, the same issues exist for other DRAM forms too, but to a *much* 
> > smaller degree.
> 
> DDR3-1333 may be better than FBDIMM's but don't count on it being much
> better.

Hey, fair enough. But it's not a problem (and it doesn't have a solution) 
today. I'm not sure it's going to have a solution tomorrow either.

> > Also, IN PRACTICE you're never ever going to see this anyway. Almost 
> > everybody wants bank interleaving, because it's a huge performance win on 
> > many loads. That, in turn, means that your memory will be spread out over 
> > multiple DIMM's even for a single page, much less any bigger area.
> 
> 4-way interleave across banks on systems may not be as common as you may
> think for future chip sets.  2-way interleave across DIMMs within a bank
> will stay.

.. and think about a realistic future.

EVERYBODY will do on-die memory controllers. Yes, Intel doesn't do it 
today, but in the one- to two-year timeframe even Intel will.

What does that mean? It means that in bigger systems, you will no longer 
even *have* 8 or 16 banks where turning off a few banks makes sense. 
You'll quite often have just a few DIMMs per die, because that's what you 
want for latency. Then you'll have CSI or HT or another interconnect.

And with a few DIMMs per die, you're back to where even just 2-way 
interleaving basically means that in order to turn off your DIMM, you 
probably need to remove HALF the memory for that CPU.
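
To make that interleaving arithmetic concrete, here is a minimal userspace 
sketch (plain C, nothing kernel-specific). The 64-byte line size and the 
simple even/odd cache-line split are assumptions for illustration, not a 
description of any particular memory controller; the point is just that 
every 4 KiB page ends up with half its lines on each DIMM, so freeing one 
DIMM means vacating essentially every page behind that controller.

/*
 * Illustration only: 2-way cache-line interleaving across two DIMMs.
 * The 64-byte line and the even/odd split are assumed for the example.
 */
#include <stdio.h>

#define CACHE_LINE	64
#define PAGE_SZ		4096

/* Which of the two DIMMs a physical address lands on under this scheme. */
static int dimm_for(unsigned long phys)
{
	return (phys / CACHE_LINE) & 1;	/* even lines -> DIMM 0, odd -> DIMM 1 */
}

int main(void)
{
	unsigned long page = 0x100000;	/* arbitrary page-aligned address */
	unsigned long off;
	int lines_on[2] = { 0, 0 };

	for (off = 0; off < PAGE_SZ; off += CACHE_LINE)
		lines_on[dimm_for(page + off)]++;

	/* Prints 32 and 32: the page straddles both DIMMs. */
	printf("lines on DIMM0: %d, lines on DIMM1: %d\n",
	       lines_on[0], lines_on[1]);
	return 0;
}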

In other words: TURNING OFF DIMMs IS A BEDTIME STORY FOR DIMWITTED 
CHILDREN.

There are maybe a couple machines IN EXISTENCE TODAY that can do it. But 
nobody actually does it in practice, and nobody even knows if it's going 
to be viable (yes, DRAM takes energy, but trying to keep memory free will 
likely waste power *too*, and I doubt anybody has any real idea of how 
much any of this would actually help in practice).

And I don't think that will change. See above. The future is *not* moving 
towards more and more DIMMs. Quite the reverse. On workstations, we are 
currently at "one or two DIMMs per die". Do you really think that 
will change? Hell no. And in big servers, pretty much everybody agrees 
that we will move towards that, rather than away from it.

So:
 - forget about turning DIMMs off. There is *no* actual data supporting 
   the notion that it's a good idea today, and I seriously doubt you can 
   really argue that it will be a good idea in five or ten years. It's a 
   hardware hack for a hardware problem, and the problems are way too 
   complex for us to solve in time for the solution to be relevant.

 - aim for NUMA memory allocation and turning off whole *nodes*. That's 
   much more likely to be productive in the longer timeframe. And yes, we 
   may well want to do memory compaction for that too, but I suspect that 
   the issues are going to be different (ie the way to do it is to simply 
   prefer certain nodes for certain allocations, and then try to keep the 
   jobs that you know can be idle on other nodes; see the sketch below)
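
As a userspace illustration of that "prefer certain nodes" idea, here is a 
minimal sketch using libnuma (link with -lnuma). The node number and the 
allocation size are arbitrary placeholders; this shows only the policy 
side of keeping a job's memory on one node, not whatever kernel-side 
allocator work the above would actually need.

/*
 * Keep a job's memory on one node so the other nodes can stay idle.
 * Node 0 and the 16 MB size are arbitrary; illustration only.
 */
#include <stdio.h>
#include <numa.h>

int main(void)
{
	size_t size = 16UL * 1024 * 1024;
	void *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support on this system\n");
		return 1;
	}

	buf = numa_alloc_onnode(size, 0);	/* allocate on node 0 */
	if (!buf) {
		fprintf(stderr, "numa_alloc_onnode failed\n");
		return 1;
	}

	/* ... run the workload against buf ... */

	numa_free(buf, size);
	return 0;
}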

Do you actually have real data supporting the notion that turning DIMMs 
off will be reasonable and worthwhile?

			Linus
