Message-ID: <1259090615.17871.696.camel@calx>
Date:	Tue, 24 Nov 2009 13:23:35 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	paulmck@...ux.vnet.ibm.com, Pekka Enberg <penberg@...helsinki.fi>,
	linux-mm@...ck.org, cl@...ux-foundation.org,
	LKML <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...e.de>
Subject: Re: lockdep complaints in slab allocator

On Tue, 2009-11-24 at 19:14 +0100, Peter Zijlstra wrote:
> On Tue, 2009-11-24 at 11:12 -0600, Matt Mackall wrote:
> > On Tue, 2009-11-24 at 09:00 -0800, Paul E. McKenney wrote:
> > > On Tue, Nov 24, 2009 at 05:33:26PM +0100, Peter Zijlstra wrote:
> > > > On Mon, 2009-11-23 at 21:13 +0200, Pekka Enberg wrote:
> > > > > Matt Mackall wrote:
> > > > > > This seems like a lot of work to paper over a lockdep false positive in
> > > > > > code that should be firmly in the maintenance end of its lifecycle? I'd
> > > > > > rather the fix or papering over happen in lockdep.
> > > > > 
> > > > > True that. Is __raw_spin_lock() out of the question, Peter?-) Passing the 
> > > > > state is pretty invasive because of the kmem_cache_free() call in 
> > > > > slab_destroy(). We re-enter the slab allocator from the outer edges, 
> > > > > which makes spin_lock_nested() very inconvenient.
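(Aside, for readers following along: the usual lockdep annotation for
taking two locks of the same class is spin_lock_nested() with a
subclass. A minimal, hypothetical sketch of the shape of the trick,
not the actual slab code:

	#include <linux/spinlock.h>

	/* Hypothetical cache structure; the real slab code keeps its
	 * list_lock per node, but the lockdep problem is the same shape. */
	struct demo_cache {
		spinlock_t lock;
	};

	static void demo_nested_free(struct demo_cache *outer,
				     struct demo_cache *inner)
	{
		spin_lock(&outer->lock);
		/*
		 * Both locks belong to the same lockdep class.  Without
		 * an annotation, lockdep reports a possible recursive
		 * deadlock here even though the instances differ.
		 * Marking the inner acquisition one nesting level deeper
		 * suppresses the false positive.
		 */
		spin_lock_nested(&inner->lock, SINGLE_DEPTH_NESTING);
		/* ... free work that may re-enter the allocator ... */
		spin_unlock(&inner->lock);
		spin_unlock(&outer->lock);
	}

The inconvenience described above is that when the allocator is
re-entered from slab_destroy(), the inner call site doesn't know a
same-class lock is already held, so there is no obvious place to pass
the subclass in.)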
> > > > 
> > > > I'm perfectly fine with letting the thing be as it is; it's apparently
> > > > not something that triggers very often, and since slab will be killed
> > > > off soon, who cares.
> > > 
> > > Which of the alternatives to slab should I be testing with, then?
> > 
> > I'm guessing your system is in the minority that has more than $10 worth
> > of RAM, which means you should probably be evaluating SLUB.
> 
> Well, I was rather hoping that'd die too ;-)
> 
> Weren't we going to go with SLQB?

News to me. Perhaps it was discussed at KS.

My understanding of the current state of play is:

SLUB: default allocator
SLAB: deep maintenance, will be removed if SLUB ever covers remaining
performance regressions
SLOB: useful for low-end (but high-volume!) embedded 
SLQB: sitting in slab.git#for-next for months, has some ground to cover
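(All four sit behind the same kmem_cache_* interface, which is what
makes them swappable at build time. A minimal sketch of that shared
API, with a hypothetical struct foo, works unchanged under any of them:

	#include <linux/errno.h>
	#include <linux/slab.h>

	struct foo {
		int value;
	};

	static struct kmem_cache *foo_cache;

	static int foo_init(void)
	{
		/* name, object size, alignment, flags, constructor */
		foo_cache = kmem_cache_create("foo", sizeof(struct foo),
					      0, SLAB_HWCACHE_ALIGN, NULL);
		return foo_cache ? 0 : -ENOMEM;
	}

	static void foo_use(void)
	{
		struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

		if (f) {
			f->value = 42;
			kmem_cache_free(foo_cache, f);
		}
	}

Which allocator actually backs those calls is a build-time Kconfig
choice, so the comparison below is about internals and performance,
not API.)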

SLQB and SLUB have pretty similar target audiences, so I agree we should
eventually have only one of them. But I strongly expect performance
results to be mixed, just as they have been when comparing SLUB and SLAB.
Similarly, SLQB still has plenty of room for tuning left compared to SLUB,
as SLUB did compared to SLAB when it first emerged. It might be a while
before a clear winner emerges.

-- 
http://selenic.com : development and support for Mercurial and Linux


