Message-ID: <20070919021705.GB20863@linux-os.sc.intel.com>
Date: Tue, 18 Sep 2007 19:17:05 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: Christoph Lameter <clameter@....com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@...el.com>,
Nick Piggin <nickpiggin@...oo.com.au>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu,
Mel Gorman <mel@...net.ie>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matthew.R.wilcox@...el.com
Subject: Re: tbench regression - Why process scheduler has impact on tbench and why small per-cpu slab (SLUB) cache creates the scenario?
On Fri, Sep 14, 2007 at 12:51:34PM -0700, Christoph Lameter wrote:
> On Fri, 14 Sep 2007, Siddha, Suresh B wrote:
> > We are trying to get the latest data with 2.6.23-rc4-mm1 with and without
> > slub. Is this good enough?
>
> Good enough. If you are concerned about the page allocator pass through
> then you may want to test the page allocator pass through patchset
> separately. The fastpath of the page allocator is currently not
> competitive if you always free and allocate a single page. If contiguous
> pages are allocated then the pass through is superior.
We are having all sorts of stability issues with -mm kernels, let alone
perf testing :(
For now, we are trying to do SLAB vs SLUB comparisons on mainline kernels.
Let's see how that goes.
Meanwhile, any chance you can point us at the relevant recent patches/fixes
that are in -mm and could perhaps be applied to the mainline kernel?
thanks,
suresh
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/