Message-ID: <4999BBE6.2080003@cs.helsinki.fi>
Date: Mon, 16 Feb 2009 21:17:58 +0200
From: Pekka Enberg <penberg@...helsinki.fi>
To: Mel Gorman <mel@....ul.ie>
CC: Nick Piggin <nickpiggin@...oo.com.au>,
Nick Piggin <npiggin@...e.de>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lin Ming <ming.m.lin@...el.com>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator (try 2)

Hi Mel,

Mel Gorman wrote:
> I haven't done much digging in here yet. Between the large page bug and
> other patches in my inbox, I haven't had the chance yet but that doesn't
> stop anyone else taking a look.

So how big does an improvement or regression have to be before it is no
longer considered noise? I randomly picked one of the results ("x86-64
speccpu integer tests") and ran it through my "summarize" script, which
gave the following:
            min    max    mean   std_dev
slub        0.96   1.09   1.01   0.04
slub-min    0.95   1.10   1.00   0.04
slub-rvrt   0.90   1.08   0.99   0.05
slqb        0.96   1.07   1.00   0.04

Apart from slub-rvrt (which seems to be regressing, interesting), all
the allocators seem to perform equally well. Hmm?
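
For reference, here is a minimal sketch of how numbers like these can be
produced and judged; it is not the actual "summarize" script and the
input ratios are made up. The useful extra quantity is the standard
error of the mean: with a std_dev of about 0.04, two means should differ
by roughly 2 * std_dev / sqrt(N) before the gap is treated as more than
noise.

/*
 * Minimal sketch, not the real "summarize" script: compute min, max,
 * mean and sample standard deviation for one allocator's benchmark
 * ratios, plus the standard error of the mean as a rough noise bound.
 * The input ratios below are made-up examples, not Mel's data.
 */
#include <math.h>
#include <stdio.h>

static void summarize(const char *name, const double *v, int n)
{
	double min = v[0], max = v[0], sum = 0.0, var = 0.0, mean;
	int i;

	for (i = 0; i < n; i++) {
		if (v[i] < min)
			min = v[i];
		if (v[i] > max)
			max = v[i];
		sum += v[i];
	}
	mean = sum / n;

	for (i = 0; i < n; i++)
		var += (v[i] - mean) * (v[i] - mean);
	var /= n - 1;				/* sample variance */

	printf("%-10s %.2f  %.2f  %.2f  %.2f  (std err of mean %.3f)\n",
	       name, min, max, mean, sqrt(var), sqrt(var / n));
}

int main(void)
{
	double example[] = { 0.96, 0.98, 1.00, 1.01, 1.03, 1.09 };

	summarize("example", example, 6);
	return 0;
}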

Btw, Yanmin, do you have access to the tests Mel is running (especially
the ones where slub-rvrt seems to do worse)? Can you see this kind of
regression? The results make me wonder whether we should avoid reverting
all of the page allocator pass-through and instead just add a kmalloc
cache for 8K allocations. Or not address the netperf regression at all.
Double-hmm.
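
To make that alternative concrete, here is a stand-alone sketch
(illustration only; slab_cache_alloc() and page_allocator_alloc() are
hypothetical stubs, not the real SLUB paths) of keeping pass-through for
large requests while adding an 8K kmalloc cache so the netperf-sized
allocations are served from the slab layer:

/*
 * Illustrative sketch only, not the real SLUB code paths; the two
 * helpers are hypothetical stand-ins.  In this sketch, without an 8K
 * cache anything above one page would pass through to the page
 * allocator; with it, 8K requests stay in the slab layer and only
 * larger ones pass through.
 */
#include <stdio.h>
#include <stdlib.h>

#define SKETCH_PAGE_SIZE	4096UL

/* stand-in for allocating from a kmalloc cache of the given object size */
static void *slab_cache_alloc(size_t object_size)
{
	printf("slab cache alloc, %zu-byte objects\n", object_size);
	return malloc(object_size);
}

/* stand-in for page allocator pass-through */
static void *page_allocator_alloc(size_t size)
{
	printf("page allocator pass-through, %zu bytes\n", size);
	return malloc(size);
}

static void *sketch_kmalloc(size_t size)
{
	/*
	 * A real kmalloc picks the smallest cache that fits; this
	 * sketch only shows the 8K boundary under discussion.  Without
	 * the 8K cache the cutoff would be SKETCH_PAGE_SIZE and an 8K
	 * request would go straight to the page allocator.
	 */
	if (size <= 2 * SKETCH_PAGE_SIZE)
		return slab_cache_alloc(2 * SKETCH_PAGE_SIZE);
	return page_allocator_alloc(size);
}

int main(void)
{
	void *p = sketch_kmalloc(8192);		/* served by the 8K cache */
	void *q = sketch_kmalloc(16384);	/* still passes through */

	free(p);
	free(q);
	return 0;
}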

Pekka