Message-Id: <20070518004304.14db3eef.akpm@linux-foundation.org>
Date: Fri, 18 May 2007 00:43:04 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Nick Piggin <npiggin@...e.de>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-arch@...r.kernel.org
Subject: Re: [rfc] increase struct page size?!
On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <npiggin@...e.de> wrote:
> On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <npiggin@...e.de> wrote:
> >
> > > Many batch operations on struct page are completely random,
> >
> > But they shouldn't be: we should aim to place physically contiguous pages
> > into logically contiguous pagecache slots, for all the reasons we
> > discussed.
>
> For big IO batch operations, pagecache would be more likely to be
> physically contiguous, as would LRU, I suppose.
read(), write(), truncate(), writeback, pagefault. Pretty common stuff.
> I'm more thinking of operations where things get reclaimed over time,
> touched or dirtied in slightly different orderings, interleaved with
> other allocations, etc.
Yes, that can happen. But in such cases we by definition aren't touching
the pageframes very often. I'd assert that when the kernel is really
hitting those pageframes hard, it is commonly doing this in ascending
pagecache order.
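
To put the locality argument in concrete terms, here is a rough
user-space model (not kernel code; the names, the 56-byte descriptor
size and the index->pfn mapping are all made up for illustration).  It
counts which memmap cachelines an ascending pagecache walk would touch
when the backing pageframes are contiguous versus scattered:

/*
 * Not kernel code: a user-space model of the locality claim above, with
 * made-up names and sizes.  memmap[pfn] stands in for the struct page
 * array; index_to_pfn[] stands in for one file's pagecache.  When
 * physically contiguous pages sit in logically contiguous pagecache
 * slots, an ascending-index walk over the descriptors touches
 * consecutive cachelines; when the pfns are scattered, nearly every
 * descriptor touch is an isolated line of its own.
 */
#include <stdio.h>
#include <stdlib.h>

#define CACHELINE	64UL
#define DESC_SIZE	56UL		/* hypothetical sizeof(struct page) */
#define NR_PFNS		(1UL << 20)
#define FILE_PAGES	1024

static void walk(const char *what, const unsigned long *index_to_pfn, int n)
{
	unsigned long lines = 0, adjacent = 0, last = (unsigned long)-1;
	int i;

	for (i = 0; i < n; i++) {
		/* cacheline holding this pageframe's descriptor in the memmap */
		unsigned long line = index_to_pfn[i] * DESC_SIZE / CACHELINE;

		if (line != last) {
			lines++;
			if (line == last + 1)
				adjacent++;	/* next-line access: prefetch-friendly */
		}
		last = line;
	}
	printf("%-18s %4lu lines touched, %4lu of them sequential\n",
	       what, lines, adjacent);
}

int main(void)
{
	static unsigned long contig[FILE_PAGES], scattered[FILE_PAGES];
	int i;

	srandom(1);
	for (i = 0; i < FILE_PAGES; i++) {
		contig[i] = 1000 + i;			/* contiguous pageframes */
		scattered[i] = random() % NR_PFNS;	/* reclaimed/refilled over time */
	}

	walk("contiguous pfns:", contig, FILE_PAGES);
	walk("scattered pfns:", scattered, FILE_PAGES);
	return 0;
}

With contiguous pageframes the walk streams through consecutive lines,
so several descriptors share each line and the prefetcher can hide most
of the misses; with scattered pageframes essentially every descriptor
touch is an independent cold line.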
>
> > If/when that happens, there will be a *lot* of locality of reference
> > against the pageframes in a lot of important codepaths.
>
> And when it doesn't happen, we eat 75% more cache misses. And for that
> matter we eat 75% more cache misses for non-batch operations like
> allocating or freeing a page by slab, for example.
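
The 75% comes straight out of footprint arithmetic.  A back-of-the-envelope
sketch, with hypothetical old/new sizes of 32 and 56 bytes chosen only so
the growth works out to 1.75x (the real numbers depend on the architecture
and on what actually gets added to struct page):

/*
 * Back-of-the-envelope for the quoted 75% figure.  The sizes are
 * hypothetical (32 -> 56 bytes is simply a 1.75x growth).
 */
#include <stdio.h>

#define CACHELINE	64UL
#define NPAGES		1024UL

static unsigned long lines_for(unsigned long struct_page_size)
{
	/* cachelines spanned by NPAGES packed descriptors in the memmap */
	return (NPAGES * struct_page_size + CACHELINE - 1) / CACHELINE;
}

int main(void)
{
	unsigned long old_sz = 32, new_sz = 56;	/* hypothetical */
	unsigned long old_lines = lines_for(old_sz);
	unsigned long new_lines = lines_for(new_sz);

	printf("%lu pages: %lu -> %lu cachelines (+%lu%%)\n", NPAGES,
	       old_lines, new_lines,
	       (new_lines - old_lines) * 100 / old_lines);
	return 0;
}

For a batch walk over packed descriptors that is 75% more cachelines to
pull in; for an isolated touch of a single struct page the extra cost
depends on whether the descriptor still fits in, and stays aligned to,
a single line.
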
"measure twice, cut once"