Message-ID: <48970779.80902@linux-foundation.org>
Date: Mon, 04 Aug 2008 08:43:21 -0500
From: Christoph Lameter <cl@...ux-foundation.org>
To: Matthew Wilcox <matthew@....cx>
CC: Pekka Enberg <penberg@...helsinki.fi>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Mel Gorman <mel@...net.ie>, andi@...stfloor.org,
Rik van Riel <riel@...hat.com>
Subject: Re: No, really, stop trying to delete slab until you've finished
making slub perform as well

Matthew Wilcox wrote:
> On Fri, May 09, 2008 at 07:21:01PM -0700, Christoph Lameter wrote:
>> - Add a patch that obsoletes SLAB and explains why SLOB does not support
>> defrag (Either of those could be theoretically equipped to support
>> slab defrag in some way but it seems that Andrew/Linus want to reduce
>> the number of slab allocators).
>
> Do we have to once again explain that slab still outperforms slub on at
> least one important benchmark? I hope Nick Piggin finds time to finish
> tuning slqb; it already outperforms slub.
>
Uhh. I forgot to delete that statement. I did not include the patch in the series.

We have a fundamental design issue there. Queuing objects on free can yield
better performance, as it does in SLAB. However, it limits concurrency (the
per-node lock has to be taken) and causes latency spikes when the queues are
processed (f.e. one test load showed 118.65 vs. 34 usecs just by switching
to SLUB).
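
To make that concrete, here is a rough userspace sketch of the two free paths
(not the actual kernel code; node_lock, BATCH, queued_free() and direct_free()
are made-up names, and the real SLAB drains per-CPU array_caches under the
per-node list_lock while SLUB hands the object straight back to its slab page):

#include <pthread.h>
#include <stdlib.h>

#define BATCH 120                       /* stand-in for SLAB's batchcount */

static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;

static __thread void *queue[BATCH];     /* per-"CPU" free queue */
static __thread int qlen;

/* SLAB-style: the common case is a plain array store, which is why
 * queuing can win on cache-hot benchmarks.  But every BATCH frees the
 * whole queue is drained under the shared per-node lock; that drain is
 * where the latency spike and the concurrency limit come from. */
static void queued_free(void *obj)
{
        queue[qlen++] = obj;
        if (qlen < BATCH)
                return;                 /* fast path: no lock taken */

        pthread_mutex_lock(&node_lock); /* all CPUs of the node serialize here */
        while (qlen)
                free(queue[--qlen]);    /* SLAB would instead put these back on
                                           the node's shared/partial lists */
        pthread_mutex_unlock(&node_lock);
}

/* SLUB-style: no queue and no per-node lock on the fast path; each
 * object goes straight back to its slab, so frees from many CPUs do
 * not serialize on a single lock. */
static void direct_free(void *obj)
{
        free(obj);
}

int main(void)
{
        for (int i = 0; i < 1000; i++)
                queued_free(malloc(32));        /* leftovers in the queue are
                                                   leaked; it is only a toy */
        for (int i = 0; i < 1000; i++)
                direct_free(malloc(32));
        return 0;
}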

Could you address the performance issues in different ways? F.e. try to free
while the object is still cache hot, or free from multiple processors
concurrently? Under high concurrent loads SLAB has to take the per-node
list_lock rather frequently (how often depends on the queue size). That does
not occur with SLUB, so you can actually free (and allocate) concurrently
with high performance.
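
A hedged sketch of why that scales: a per-slab freelist pushed with
compare-and-swap. C11 atomics here stand in for what the kernel does with
cmpxchg on the slab page, and the types and names are made up for
illustration:

#include <stdatomic.h>

struct object {                         /* free objects link through themselves */
        struct object *next;
};

struct slab {                           /* one slab page's freelist head */
        _Atomic(struct object *) freelist;
};

/* Push a freed object onto its slab's freelist with compare-and-swap.
 * Frees to different slabs never touch shared state; frees racing on
 * the same slab just retry the CAS instead of waiting on a node lock. */
static void slab_free(struct slab *s, struct object *obj)
{
        struct object *old = atomic_load_explicit(&s->freelist,
                                                  memory_order_relaxed);
        do {
                obj->next = old;
        } while (!atomic_compare_exchange_weak_explicit(&s->freelist, &old,
                                                        obj,
                                                        memory_order_release,
                                                        memory_order_relaxed));
}

int main(void)
{
        static struct slab s;
        static struct object objs[4];
        for (int i = 0; i < 4; i++)
                slab_free(&s, &objs[i]);
        return 0;
}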