Message-Id: <20070428142834.95bcf9d1.akpm@linux-foundation.org>
Date: Sat, 28 Apr 2007 14:28:34 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: William Lee Irwin III <wli@...omorphy.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Nick Piggin <nickpiggin@...oo.com.au>,
David Chinner <dgc@....com>,
Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
Jens Axboe <jens.axboe@...cle.com>,
Badari Pulavarty <pbadari@...il.com>,
Maxim Levitsky <maximlevitsky@...il.com>
Subject: Re: [00/17] Large Blocksize Support V3
On Sat, 28 Apr 2007 12:19:56 -0700 William Lee Irwin III <wli@...omorphy.com> wrote:
> I'm skeptical, however, that the contiguity gains will compensate for
> the CPU required to do such with the pcp lists.
It wouldn't surprise me if approximate contiguity is a pretty common case
in the pcp lists. Reclaim isn't very important here: most pages get freed
in truncate and particularly in unmap_vmas. If the allocator is handing out
pages in reasonably contiguous fashion (and it does, and we're talking
about strengthening that) then I'd expect that very often we end up freeing
pages which have a lot of locality too. So the sort of tricks you're
discussing might get a pretty good hit rate.
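Just to illustrate what "approximate contiguity on the pcp lists" would
mean: here's a throwaway user-space model (nothing to do with the real
per_cpu_pages machinery, names made up) which takes a batch of freed page
frame numbers of the sort a truncate would put on a pcp list and counts
how many physically contiguous runs they form. Few runs means a
collection trick has something to chew on.

/*
 * Toy user-space model, not kernel code: take a batch of freed pfns
 * (the sort of thing sitting on a pcp free list after a truncate)
 * and count how many maximal physically-contiguous runs they form.
 */
#include <stdio.h>
#include <stdlib.h>

static int cmp_pfn(const void *a, const void *b)
{
	unsigned long x = *(const unsigned long *)a;
	unsigned long y = *(const unsigned long *)b;
	return (x > y) - (x < y);
}

/* Count maximal runs of consecutive pfns in the batch. */
static int count_runs(unsigned long *pfns, int n)
{
	int i, runs;

	if (n == 0)
		return 0;
	qsort(pfns, n, sizeof(*pfns), cmp_pfn);
	for (i = 1, runs = 1; i < n; i++)
		if (pfns[i] != pfns[i - 1] + 1)
			runs++;
	return runs;
}

int main(void)
{
	/* e.g. a truncate freeing a mostly-contiguous file mapping */
	unsigned long batch[] = { 1000, 1001, 1002, 1003, 2048, 2049, 1004 };
	int n = sizeof(batch) / sizeof(batch[0]);

	printf("%d pages in %d contiguous runs\n", n, count_runs(batch, n));
	return 0;
}

With that sample batch it reports 7 pages in 2 runs.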
otoh, it's not obvious to me that there's a lot to be gained here. If we
repeatedly call the buddy allocator to free contiguous order-0 pages, all
the data structures needed to handle those frees should be in L1 cache
and the buddy itself becomes our point-of-collection, if you see what I
mean.
Dunno. Profiling should tell?
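To make the point-of-collection idea concrete, here's another user-space
toy (again, nothing like the real mm/page_alloc.c; the names are
invented): free sixteen contiguous, aligned order-0 pages one at a time
and let XOR-buddy merging collapse them, with no batching at all on the
freeing side.

/*
 * Toy user-space buddy model: free order-0 pages one at a time and
 * let buddy coalescing merge them.  If the pages are contiguous and
 * aligned, the order-0 frees collapse into a single high-order block.
 */
#include <stdio.h>
#include <string.h>

#define MAX_ORDER 10
#define NPAGES    (1 << MAX_ORDER)

/*
 * free_order[pfn] == o means a free block of order o starts at pfn;
 * -1 means pfn is not the head of a free block.
 */
static int free_order[NPAGES];

static void free_page_order0(unsigned long pfn)
{
	int order = 0;

	while (order < MAX_ORDER) {
		unsigned long buddy = pfn ^ (1UL << order);

		if (free_order[buddy] != order)
			break;
		/* buddy is free at the same order: merge with it */
		free_order[buddy] = -1;
		pfn &= ~(1UL << order);
		order++;
	}
	free_order[pfn] = order;
}

int main(void)
{
	unsigned long pfn;
	int i;

	memset(free_order, -1, sizeof(free_order));

	/* free 16 contiguous, aligned order-0 pages one at a time */
	for (pfn = 64; pfn < 80; pfn++)
		free_page_order0(pfn);

	for (i = 0; i < NPAGES; i++)
		if (free_order[i] >= 0)
			printf("free block at pfn %d, order %d\n",
			       i, free_order[i]);
	return 0;
}

It ends up with a single free block at pfn 64, order 4, i.e. the buddy
did the collecting for us; the open question is whether the real thing
stays hot enough in cache to make that cheap, which is where the
profiling comes in.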