Message-ID: <Pine.LNX.4.64.0705070901400.4290@schroedinger.engr.sgi.com>
Date: Mon, 7 May 2007 09:06:05 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
cc: David Chinner <dgc@....com>, Theodore Tso <tytso@....edu>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
William Lee Irwin III <wli@...omorphy.com>,
Jens Axboe <jens.axboe@...cle.com>,
Badari Pulavarty <pbadari@...il.com>,
Maxim Levitsky <maximlevitsky@...il.com>
Subject: Re: [00/17] Large Blocksize Support V3
On Mon, 7 May 2007, Eric W. Biederman wrote:
> Yes, instead of having to redesign the interface between the
> fs and the page cache for those filesystems that handle large
> blocks we instead need to redesign significant parts of the VM interface.
> Shift the redesign work to another group of people and call it trivial.

To some extent that is true. But there is also an additional gain: we can
likely get the VM to handle larger pages too, which may get rid of
hugetlbfs etc. The work is pretty straightforward: no locking changes,
for example. So it is hardly a redesign. I think the crucial point is the
antifrag/defrag issue if we want to generalize it.

I have an updated patch here that relies on page reservations. It adds
something called page pools: at bootup you specify how many pages of each
size you want, and the page cache then uses those pages for filesystems
that need a larger blocksize.
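
To illustrate what such a boot-time reservation amounts to, here is a
minimal userspace sketch; the names (page_pool, pool_init, pool_alloc)
and the layout are made up for illustration and are not code from the
patch:

/* Userspace model of the page pool idea: a fixed number of large pages
 * is reserved up front (as a boot parameter would) and consumers draw
 * from that reserve instead of the general allocator. */
#include <stdio.h>
#include <stdlib.h>

#define LARGE_PAGE_SIZE (64 * 1024)

struct page_pool {
	size_t page_size;	/* size of each reserved page */
	size_t nr_total;	/* pages reserved at "boot" */
	size_t nr_free;		/* pages still available */
	void **free_pages;	/* simple free stack */
};

static int pool_init(struct page_pool *p, size_t page_size, size_t nr)
{
	p->page_size = page_size;
	p->nr_total = p->nr_free = nr;
	p->free_pages = malloc(nr * sizeof(void *));
	if (!p->free_pages)
		return -1;
	for (size_t i = 0; i < nr; i++) {
		p->free_pages[i] = aligned_alloc(page_size, page_size);
		if (!p->free_pages[i])
			return -1;
	}
	return 0;
}

static void *pool_alloc(struct page_pool *p)
{
	/* NULL once the reserve is exhausted */
	return p->nr_free ? p->free_pages[--p->nr_free] : NULL;
}

int main(void)
{
	struct page_pool pool;

	/* analogous to reserving 128 pages of 64k each at boot */
	if (pool_init(&pool, LARGE_PAGE_SIZE, 128))
		return 1;
	printf("reserved %zu pages of %zu bytes\n", pool.nr_total,
	       pool.page_size);
	printf("first page: %p\n", pool_alloc(&pool));
	return 0;
}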

The interesting thing about that one is that it actually enables support
for multiple blocksizes with a single larger pagesize. If, for example, we
set up a pool of 64k pages, then the block layer can segment those into
16k pieces. So one can actually use 16k, 32k, and 64k block sizes with a
single larger page size.
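
Just to spell out the arithmetic (an illustrative snippet, not code from
the patch):

/* How many blocks of each supported size fit into one 64k pool page. */
#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 64 * 1024;
	const unsigned int block_sizes[] = { 16 * 1024, 32 * 1024, 64 * 1024 };
	const unsigned int n = sizeof(block_sizes) / sizeof(block_sizes[0]);

	for (unsigned int i = 0; i < n; i++)
		printf("%2uk blocks: %u per 64k page\n",
		       block_sizes[i] / 1024, page_size / block_sizes[i]);
	return 0;
}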