Message-ID: <Pine.LNX.4.64.0710041034550.10854@schroedinger.engr.sgi.com>
Date: Thu, 4 Oct 2007 10:38:15 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Matthew Wilcox <willy@...ux.intel.com>
cc: Nick Piggin <nickpiggin@...oo.com.au>,
Christoph Hellwig <hch@....de>, Mel Gorman <mel@...net.ie>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
David Chinner <dgc@....com>, Jens Axboe <jens.axboe@...cle.com>
Subject: Re: SLUB performance regression vs SLAB
On Thu, 4 Oct 2007, Matthew Wilcox wrote:
> So, on "a well-known OLTP benchmark which prohibits publishing absolute
> numbers" and on an x86-64 system (I don't think exactly which model
> is important), we're seeing *6.51%* performance loss on slub vs slab.
> This is with a 2.6.23-rc3 kernel. Tuning the boot parameters, as you've
> asked for before (slub_min_order=2, slub_max_order=4, slub_min_objects=8)
> gets back 0.38% of that. It's still down 6.13% over slab.
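
For anyone reproducing the tuned run: these are plain kernel boot
parameters, so testing them needs only an edit to the boot loader
config and a reboot, no rebuild. A GRUB entry might look like this
(the image path and root device here are placeholders):

	kernel /boot/vmlinuz-2.6.23-rc3 ro root=/dev/sda1 slub_min_order=2 slub_max_order=4 slub_min_objects=8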
Yeah, the fastpath vs. slowpath distinction is not the issue, as Siddha
and I concluded earlier. It seems we are mainly seeing cacheline
bouncing due to two cpus accessing metadata in the same page struct.
The patches in -mm that are scheduled to be merged for .24 address that
issue. I have repeatedly asked that these patches be tested; they were
posted months ago.
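
To see why this hurts, here is a minimal userspace sketch of the effect
(a hypothetical illustration, not SLUB code): two threads update
counters that happen to share one cacheline, so the line ping-pongs
between CPUs on every write, much like two cpus touching metadata in
the same page struct.

/* build: gcc -O2 -pthread false-sharing.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000000UL

struct shared_meta {
	unsigned long counter_a;	/* both fields fit in one */
	unsigned long counter_b;	/* 64-byte cacheline      */
};

static struct shared_meta meta;

static void *bump_a(void *arg)
{
	for (unsigned long i = 0; i < ITERATIONS; i++)
		__sync_fetch_and_add(&meta.counter_a, 1);
	return NULL;
}

static void *bump_b(void *arg)
{
	for (unsigned long i = 0; i < ITERATIONS; i++)
		__sync_fetch_and_add(&meta.counter_b, 1);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, bump_a, NULL);
	pthread_create(&t2, NULL, bump_b, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("a=%lu b=%lu\n", meta.counter_a, meta.counter_b);
	return 0;
}

Padding each counter out to its own cacheline makes the bouncing go
away; the per cpu work in -mm does the analogous thing for the slab
metadata.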
> Now, where do we go next? I suspect that 2.6.23-rc9 has significant
> changes since -rc3, but I'd like to confirm that before kicking off
> another (expensive) run. Please tell me which kernels would be useful
> to test.
I thought Siddha had a test in the works with the per cpu structure
patchset from -mm? Could you sync up with Siddha?