Message-ID: <Pine.LNX.4.64.0704260028190.31003@schroedinger.engr.sgi.com>
Date: Thu, 26 Apr 2007 00:34:50 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Nick Piggin <nickpiggin@...oo.com.au>
cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
linux-kernel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
William Lee Irwin III <wli@...omorphy.com>,
David Chinner <dgc@....com>,
Jens Axboe <jens.axboe@...cle.com>,
Badari Pulavarty <pbadari@...il.com>,
Maxim Levitsky <maximlevitsky@...il.com>
Subject: Re: [00/17] Large Blocksize Support V3
On Thu, 26 Apr 2007, Nick Piggin wrote:
> No I don't want to add another fs layer.
Well, maybe you could explain what you want, preferably without redefining
the established terms?
> I still don't think anti fragmentation or defragmentation are a good
> approach, when you consider the alternatives.
I have not heard of any alternatives in this discussion. Just the old
line of let's tune the VM here and there and hope it lasts a while longer.
> OK, I would like to see them. And also discussions of things like why
> we shouldn't increase PAGE_SIZE instead.
Because 4k is a good page size that is bound to the binary format? Frankly,
there is no point in having my text files in large page sizes. However,
when I read a DVD I may want to transfer 64k chunks, and when I use my
flash drive I may want to transfer 128k chunks. And yes, if a scientific
application needs to do a data dump then it should be able to use very high
page sizes (megabytes, gigabytes) to be able to continue its work while
the huge dump runs at full I/O speed ...
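
For illustration only (this is not part of the patchset, just a minimal
userspace sketch of the point above): issuing one large read per chunk
instead of many 4k reads. The 128k figure and the program itself are
assumptions chosen to mirror the flash-drive example.

	/*
	 * Hypothetical example: read a device or file in large, aligned
	 * chunks (128k here, mirroring the flash-drive case above),
	 * rather than one 4k page at a time.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define CHUNK_SIZE (128 * 1024)	/* e.g. 128k for a flash drive */

	int main(int argc, char **argv)
	{
		void *buf;
		ssize_t n;
		int fd;

		if (argc < 2) {
			fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
			return 1;
		}

		/* Page-aligned buffer, so direct-I/O style transfers remain possible. */
		if (posix_memalign(&buf, 4096, CHUNK_SIZE)) {
			perror("posix_memalign");
			return 1;
		}

		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* One read() per 128k chunk instead of 32 reads of 4k each. */
		while ((n = read(fd, buf, CHUNK_SIZE)) > 0)
			;	/* consume the data here */

		if (n < 0)
			perror("read");

		close(fd);
		free(buf);
		return 0;
	}

The same idea carries over to the page cache side: the larger the unit the
VM can hand to the block layer in one go, the fewer per-page operations a
bulk transfer like the DVD or dump case has to pay for.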