Message-ID: <Pine.LNX.4.64.0703020919350.16719@schroedinger.engr.sgi.com>
Date: Fri, 2 Mar 2007 09:23:49 -0800 (PST)
From: Christoph Lameter <clameter@...r.sgi.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Rik van Riel <riel@...hat.com>, Mel Gorman <mel@...net.ie>,
npiggin@...e.de, mingo@...e.hu, jschopp@...tin.ibm.com,
arjan@...radead.org, torvalds@...ux-foundation.org,
mbligh@...igh.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: The performance and behaviour of the anti-fragmentation related
patches
On Fri, 2 Mar 2007, Andrew Morton wrote:
> > Linux is *not* happy on 256GB systems. Even on some 32GB systems
> > the swappiness setting *needs* to be tweaked before Linux will even
> > run in a reasonable way.
>
> Please send testcases.
It is not happy if you put 256GB into one zone. We are fine with 1k nodes
of 8GB each and a 16k page size (which cuts the number of page_structs to
manage to a fourth of what 4k pages would need). So the total memory is
8TB, which is significantly larger than 256GB.
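
A rough back-of-envelope comparison of the page_struct counts involved
(not from the original mail; the numbers are only illustrative and assume
4k vs. 16k page sizes as above):

    #include <stdio.h>

    /* Illustrative arithmetic only: compares the number of page structs
     * a single 256GB zone needs with what one 8GB node needs on the
     * 16k-page configuration described above. */
    int main(void)
    {
            unsigned long long gb = 1024ULL * 1024 * 1024;

            /* 256GB in a single zone with 4k pages */
            unsigned long long pages_one_zone = 256 * gb / 4096;

            /* 8GB per node with a 16k page size */
            unsigned long long pages_per_node = 8 * gb / 16384;

            printf("page structs, one 256GB/4k zone: %llu\n", pages_one_zone); /* 67,108,864 */
            printf("page structs, one 8GB/16k node : %llu\n", pages_per_node); /*    524,288 */
            printf("ratio                          : %llu\n",
                   pages_one_zone / pages_per_node);                           /*        128 */
            return 0;
    }

So reclaim and the other per-zone scans deal with lists two orders of
magnitude shorter per node than a single 256GB zone would require.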
If we do this node/zone merging and reassign MAX_ORDER blocks to virtual
node/zones for containers (each with its own LRU etc.), then this would
also reduce the number of page_structs on each list and may make things a
bit easier. A rough sketch of such a virtual node/zone follows.
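
A purely hypothetical sketch of what such a virtual node/zone might look
like (the struct name and fields are made up for illustration; this is not
an existing kernel structure):

    /* Hypothetical sketch only -- none of these names exist in the kernel.
     * Each container gets a virtual zone assembled from a set of MAX_ORDER
     * blocks, with its own LRU lists so reclaim scans a much shorter list
     * than one covering all 256GB. */
    struct virtual_zone {
            struct list_head max_order_blocks;  /* MAX_ORDER blocks owned by this container */
            struct list_head active_list;       /* per-container active LRU */
            struct list_head inactive_list;     /* per-container inactive LRU */
            unsigned long    nr_pages;          /* pages managed by this virtual zone */
            spinlock_t       lru_lock;          /* protects the LRU lists above */
    };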
We would then get the same effect as the partitioning via NUMA nodes on
our 8TB boxes. However, you still have a bandwidth issue since your 256GB
box likely has only a single memory bus, and all memory traffic for the
node/zones has to go through that single bottleneck. That bottleneck does
not exist on NUMA machines.