Message-ID: <20090216194401.GC31264@csn.ul.ie>
Date: Mon, 16 Feb 2009 19:44:01 +0000
From: Mel Gorman <mel@....ul.ie>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Nick Piggin <nickpiggin@...oo.com.au>,
Nick Piggin <npiggin@...e.de>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lin Ming <ming.m.lin@...el.com>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator (try 2)
On Mon, Feb 16, 2009 at 09:25:35PM +0200, Pekka Enberg wrote:
> Hi Mel,
>
> On Mon, Feb 16, 2009 at 8:42 PM, Mel Gorman <mel@....ul.ie> wrote:
> > Slightly later than hoped for, but here are the results of the profile
> > run between the different slab allocators. It also includes information on
> > the performance on SLUB with the allocator pass-thru logic reverted by commit
> > http://git.kernel.org/?p=linux/kernel/git/penberg/slab-2.6.git;a=commitdiff;h=97a4871761e735b6f1acd3bc7c3bac30dae3eab9
>
> Did you just cherry-pick the patch or did you run it with the
> topic/slub/perf branch?
Cherry-picked, to minimise the number of factors involved.
> There's a follow-up patch from Yanmin which
> will make a difference for large allocations when page-allocator
> pass-through is reverted:
>
> http://git.kernel.org/?p=linux/kernel/git/penberg/slab-2.6.git;a=commitdiff;h=79b350ab63458ef1d11747b4f119baea96771a6e
>
Is this expected to make a difference to workloads that are not that
allocator-intensive? I doubt it'll make much difference to speccpu, but
conceivably it makes a difference to sysbench.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab