Date:	Mon, 4 Aug 2008 11:11:58 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Matthew Wilcox <matthew@....cx>,
	Pekka Enberg <penberg@...helsinki.fi>,
	akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, Mel Gorman <mel@...net.ie>,
	andi@...stfloor.org
Subject: Re: No, really, stop trying to delete slab until you've finished
 making slub perform as well

On Mon, 04 Aug 2008 08:43:21 -0500
Christoph Lameter <cl@...ux-foundation.org> wrote:
> Matthew Wilcox wrote:
> > On Fri, May 09, 2008 at 07:21:01PM -0700, Christoph Lameter wrote:
> >> - Add a patch that obsoletes SLAB and explains why SLOB does not support
> >>   defrag (Either of those could be theoretically equipped to support
> >>   slab defrag in some way but it seems that Andrew/Linus want to reduce
> >>   the number of slab allocators).
> > 
> > Do we have to once again explain that slab still outperforms slub on at
> > least one important benchmark?  I hope Nick Piggin finds time to finish
> > tuning slqb; it already outperforms slub.
> > 
> 
> Uhh. I forgot to delete that statement. I did not include the patch in the series.
> 
> We have a fundamental design issue there. Queueing on free can result in
> better performance, as in SLAB. However, it limits concurrency (per-node lock
> taking) and causes latency spikes due to queue processing (f.e. one test load
> dropped from 118.65 to 34 usecs just by switching to SLUB).
> 
> Could you address the performance issues in different ways? F.e. try to free
> when the object is hot or free from multiple processors? SLAB has to take the
> list_lock rather frequently under high concurrent loads (depends on queue
> size). That will not occur with SLUB. So you actually can free (and allocate)
> concurrently with high performance.

I guess you could bypass the queueing on free for objects that
come from a "local" SLUB page, only queueing objects that go
onto remote pages.

That way workloads that already perform well with SLUB should
keep the current performance, while workloads that currently
perform badly with SLUB should get an improvement.
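A minimal user-space sketch of that hybrid policy (all names here are illustrative, not kernel API): free directly when the object's page belongs to the freeing CPU, and only queue frees destined for remote pages, draining the queue in batches the way SLAB-style queueing would under the per-node list_lock.

```c
/* Hypothetical sketch of the hybrid free path suggested above.
 * Assumption: each slab page records which CPU's per-CPU list it
 * belongs to; this is a model, not the real SLUB data structures. */
#include <assert.h>
#include <stddef.h>

#define QUEUE_MAX 16

struct page {
	int owner_cpu;	/* CPU whose per-CPU slab owns this page */
	int inuse;	/* objects currently allocated from it */
};

struct remote_queue {
	struct page *pages[QUEUE_MAX];
	int count;
};

/* Fast path: the page is local, so free immediately.  In a real
 * allocator this needs no per-node lock, since only this CPU
 * manipulates the page's freelist. */
static void free_local(struct page *pg)
{
	pg->inuse--;
}

/* Slow path: defer the free by queueing it for batch processing. */
static int queue_remote(struct remote_queue *q, struct page *pg)
{
	if (q->count == QUEUE_MAX)
		return 0;	/* queue full: caller must drain first */
	q->pages[q->count++] = pg;
	return 1;
}

/* Drain the queue: a real allocator would take the per-node
 * list_lock once here for the whole batch, amortizing the cost. */
static void drain_remote(struct remote_queue *q)
{
	for (int i = 0; i < q->count; i++)
		q->pages[i]->inuse--;
	q->count = 0;
}

/* The hybrid policy: bypass queueing for local pages, queue the rest. */
static void hybrid_free(int this_cpu, struct page *pg,
			struct remote_queue *q)
{
	if (pg->owner_cpu == this_cpu) {
		free_local(pg);
	} else if (!queue_remote(q, pg)) {
		drain_remote(q);
		queue_remote(q, pg);
	}
}
```

The point of the sketch is only the dispatch: local frees never touch the shared queue, so the common case keeps SLUB's lock-free behavior, while remote frees pay the queueing cost SLAB pays everywhere.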

-- 
All Rights Reversed
