Message-ID: <Pine.LNX.4.64.0805141521550.20277@schroedinger.engr.sgi.com>
Date: Wed, 14 May 2008 15:32:17 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Matthew Wilcox <matthew@....cx>
cc: Andi Kleen <andi@...stfloor.org>,
Pekka Enberg <penberg@...helsinki.fi>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Rik van Riel <riel@...hat.com>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Mel Gorman <mel@...net.ie>, mpm@...enic.com,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: [patch 21/21] slab defrag: Obsolete SLAB
On Wed, 14 May 2008, Matthew Wilcox wrote:
> Since there's no way we've found to date to get the TPC test to you,
> how about we settle for analysing _this_ testcase which did show a
> significant performance degradation for slub?
>
> I don't think it's an unreasonable testcase either -- effectively it's
> allocating memory on all CPUs and then freeing it all on one. If that's
> a worst-case scenario for slub, then slub isn't suitable for replacing
> slab yet.
Indeed that is a worst-case scenario, due to the finer-grained locking. The
flip side is that fast concurrent freeing of objects from two processors
will perform better under slub, since there is significantly less global
lock contention and less work expiring objects and moving them around. (If
you hit the queue limits, SLAB does synchronous merging of objects into
slabs; it is then no longer able to hide the object-handling overhead in
cache_reap().)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/