Message-ID: <Pine.LNX.4.64.0703012236160.1979@schroedinger.engr.sgi.com>
Date: Thu, 1 Mar 2007 22:51:00 -0800 (PST)
From: Christoph Lameter <clameter@...r.sgi.com>
To: Nick Piggin <npiggin@...e.de>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@...net.ie>, mingo@...e.hu,
jschopp@...tin.ibm.com, arjan@...radead.org,
torvalds@...ux-foundation.org, mbligh@...igh.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: The performance and behaviour of the anti-fragmentation related patches

On Fri, 2 Mar 2007, Nick Piggin wrote:
> > There was no talk about slightly. 1G page size would actually be quite
> > convenient for some applications.
>
> But it is far from convenient for the kernel. So we have hugepages, so
> we can stay out of the hair of those applications and they can stay out
> of ours.
Huge pages cannot do I/O, so we would be back to the gazillions of pages to
be handled for I/O. I'd love to have I/O support for huge pages; that would
address some of the issues.
> > Writing a terabyte of memory to disk while handling 256 billion page
> > structs? In the case of a system with 1 petabyte of memory this may be
> > rather typical, and necessary for the application to be able to save its
> > state on disk.
>
> But you will have newer IO controllers, faster CPUs...
Sure we will. And you believe that the newer controllers will be able to
magically shrink the SG lists somehow? Will we offload the coalescing of
page structs into bios to the hardware or some such thing? And the vmscans
etc. too?
> Is it a problem or isn't it? Waving around the 256 billion number isn't
> impressive because it doesn't really say anything.
It is the number of items that need to be handled by the I/O layer and
likely by the SG engine.
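A toy sketch of why coalescing does not make the per-page work go away
(userspace C, made-up PFNs, not the real block layer code): adjacent frames
may merge into fewer SG segments, but every 4K page still has to be walked
once to build the list.

#include <stdio.h>

/* Toy model of SG coalescing: merge physically adjacent 4K page
 * frames into segments.  The walk is still O(number of pages). */
static unsigned int sg_segments(const unsigned long *pfn, unsigned int nr)
{
        unsigned int i, segs = 0;

        for (i = 0; i < nr; i++)
                if (i == 0 || pfn[i] != pfn[i - 1] + 1)
                        segs++;                 /* a new segment starts */
        return segs;
}

int main(void)
{
        /* 8 pages forming two physically contiguous runs of 4 */
        unsigned long pfn[] = { 100, 101, 102, 103, 500, 501, 502, 503 };

        printf("pages walked: 8, SG segments: %u\n",
               sg_segments(pfn, 8));            /* prints 2 segments */
        return 0;
}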
> I understand you have controllers (or maybe it is a block layer limit)
> that don't work well with 4K pages, but work OK with 16K pages.
Really? This is the first I have heard of it.
> This is not something that we would introduce variable sized pagecache
> for, surely.
I am not sure where you get the idea that this is the sole reason why we
need to be able to handle larger contiguous chunks of memory.
How about coming up with a response to the issue at hand? How do I write
back 1 terabyte effectively? OK, this may be an exotic configuration today,
but in a year it may be much more common. Memory sizes keep on increasing,
and so does the number of page structs to be handled for I/O. At some point
we need a solution here.
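For the scaling argument, another quick back-of-the-envelope sketch
(userspace C; the 56 bytes per struct page is an assumption, the real size
depends on architecture and config):

#include <stdio.h>

#define PAGE_SIZE      4096ULL
#define STRUCT_PAGE_SZ 56ULL    /* assumed size; varies with arch/config */

int main(void)
{
        unsigned long long mem;

        /* Number of page structs, and the memory they themselves eat,
         * as RAM grows from 1 TiB to 1 PiB at a 4K base page size. */
        for (mem = 1ULL << 40; mem <= 1ULL << 50; mem <<= 5) {
                unsigned long long pages = mem / PAGE_SIZE;

                printf("%5llu TiB RAM -> %13llu page structs (~%llu GiB of metadata)\n",
                       mem >> 40, pages, pages * STRUCT_PAGE_SZ >> 30);
        }
        return 0;
}

At a petabyte that comes out to a few hundred billion page structs (the
same ballpark as the 256 billion above) and, with the assumed 56-byte
struct page, on the order of 14 TiB of memory spent just describing memory.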