Message-ID: <20070428112117.6170f581@the-village.bc.nu>
Date: Sat, 28 Apr 2007 11:21:17 +0100
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Chinner <dgc@....com>, Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
William Lee Irwin III <wli@...omorphy.com>,
Jens Axboe <jens.axboe@...cle.com>,
Badari Pulavarty <pbadari@...il.com>,
Maxim Levitsky <maximlevitsky@...il.com>,
Nick Piggin <nickpiggin@...oo.com.au>
Subject: Re: [00/17] Large Blocksize Support V3
> > Also remember that even if you do larger pages by using virtual pairs or
> > quads of real pages because it helps on some systems you end up needing
> > the same sized sglist as before so you don't make anything worse for
> > half-assed controllers as you get the same I/O size providing they have
> > the minimal 2 or 4 sg list entries (and those that don't are genuinely
> > beyond saving and nowadays very rare).
> >
>
> Could you expand on that a bit please? I don't get it.

Put a physically contiguous 16K "page" into the page cache and you need
to allocate only 1 sg entry, a clear benefit, IFF you can actually
allocate the pages.

Put a 16K "page" into the page cache made up of 4 real 4K pages which
are not physically contiguous and you need 4 sg list entries, which is
no worse than if you were using plain 4K pages:

  4 per 16K page cache "logical page"        -> 4 sg entries per 16K
  1 per 4K physical page for 4K page cache   -> 4 sg entries per 16K
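
For illustration only, the same arithmetic as a standalone C sketch; the
constants are hypothetical stand-ins, not kernel values or kernel API:

#include <stdio.h>

/* Back-of-envelope sg entry counts for 16K of I/O, per the cases above.
 * PAGE_SIZE_4K and LOGICAL_16K are illustrative constants only. */
#define PAGE_SIZE_4K	4096
#define LOGICAL_16K	(4 * PAGE_SIZE_4K)

int main(void)
{
	/* Case 1: one physically contiguous 16K page in the page cache. */
	int sg_contiguous = 1;

	/* Case 2: a 16K "logical page" built from 4 discontiguous 4K pages:
	 * each physically separate page needs its own sg entry. */
	int sg_virtual = LOGICAL_16K / PAGE_SIZE_4K;	/* = 4 */

	/* Case 3: plain 4K page cache, 16K worth of data. */
	int sg_plain = LOGICAL_16K / PAGE_SIZE_4K;	/* = 4, same as case 2 */

	printf("contiguous 16K page:           %d sg entry\n", sg_contiguous);
	printf("16K logical page of 4K pages:  %d sg entries\n", sg_virtual);
	printf("4K page cache, 16K of data:    %d sg entries\n", sg_plain);
	return 0;
}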

The only ugly case for the latter is if you are reading something like a
16K block size ext3fs created on an old IA64 box onto a real computer,
and you have a controller with too few sg list entries to read a 16K
logical page in a single request. At that point the block layer is going
to have kittens.
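
A minimal sketch of that failure mode, assuming a made-up controller
limit (max_sg is a hypothetical value, not a real driver field):

#include <stdio.h>

int main(void)
{
	int pages_per_logical = 16384 / 4096;	/* 16K logical page as 4 x 4K pages */
	int max_sg = 2;				/* hypothetical controller sg limit */

	if (max_sg < pages_per_logical)
		printf("one 16K logical page needs %d sg entries, controller only has %d\n",
		       pages_per_logical, max_sg);
	return 0;
}
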
Alan