Date:	Tue, 24 Nov 2009 23:07:30 +0200
From:	Pekka Enberg <penberg@...helsinki.fi>
To:	Matt Mackall <mpm@...enic.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, paulmck@...ux.vnet.ibm.com,
	linux-mm@...ck.org, cl@...ux-foundation.org,
	LKML <linux-kernel@...r.kernel.org>,
	Nick Piggin <npiggin@...e.de>
Subject: Re: lockdep complaints in slab allocator

On Tue, Nov 24, 2009 at 9:23 PM, Matt Mackall <mpm@...enic.com> wrote:
> On Tue, 2009-11-24 at 19:14 +0100, Peter Zijlstra wrote:
>> On Tue, 2009-11-24 at 11:12 -0600, Matt Mackall wrote:
>> > On Tue, 2009-11-24 at 09:00 -0800, Paul E. McKenney wrote:
>> > > On Tue, Nov 24, 2009 at 05:33:26PM +0100, Peter Zijlstra wrote:
>> > > > On Mon, 2009-11-23 at 21:13 +0200, Pekka Enberg wrote:
>> > > > > Matt Mackall wrote:
>> > > > > > This seems like a lot of work to paper over a lockdep false positive in
>> > > > > > code that should be firmly in the maintenance end of its lifecycle? I'd
>> > > > > > rather the fix or papering over happen in lockdep.
>> > > > >
>> > > > > True that. Is __raw_spin_lock() out of the question, Peter?-) Passing
>> > > > > the state is pretty invasive because of the kmem_cache_free() call in
>> > > > > slab_destroy(). We re-enter the slab allocator from the outer edges,
>> > > > > which makes spin_lock_nested() very inconvenient.
>> > > >
>> > > > I'm perfectly fine with letting the thing be as it is, its apparently
>> > > > not something that triggers very often, and since slab will be killed
>> > > > off soon, who cares.
>> > >
>> > > Which of the alternatives to slab should I be testing with, then?
>> >
>> > I'm guessing your system is in the minority that has more than $10 worth
>> > of RAM, which means you should probably be evaluating SLUB.
>>
>> Well, I was rather hoping that'd die too ;-)
>>
>> Weren't we going to go with SLQB?
>
> News to me. Perhaps it was discussed at KS.
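
(Aside: a minimal sketch of the spin_lock_nested() annotation discussed in the
quoted lockdep exchange above. This is not the actual slab code; struct
cache_node and drain_into() are made-up names used only to illustrate the
pattern.)

#include <linux/spinlock.h>

struct cache_node {
	spinlock_t list_lock;	/* one instance per node, all in the same lock class */
};

/* Illustrative only: move free objects from one node's lists to another's. */
static void drain_into(struct cache_node *from, struct cache_node *to)
{
	spin_lock(&from->list_lock);
	/*
	 * Second lock of the same lock class: without the _nested()
	 * annotation lockdep cannot tell this apart from taking
	 * from->list_lock recursively, so it reports a false positive.
	 */
	spin_lock_nested(&to->list_lock, SINGLE_DEPTH_NESTING);

	/* ... transfer objects between the two nodes ... */

	spin_unlock(&to->list_lock);
	spin_unlock(&from->list_lock);
}

What makes this awkward for slab is that the inner acquisition does not happen
in the same function: slab_destroy() ends up calling kmem_cache_free(), which
re-enters the allocator from the outside, so the code that actually takes the
lock has no way of knowing it is already nested and cannot simply be handed
SINGLE_DEPTH_NESTING.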

Yes, we discussed this at KS. The plan was to merge SLQB into mainline
so people could test it more easily, but unfortunately it hasn't gotten
any love from Nick recently, which makes me think it's going to miss
the merge window for .33 as well.

> My understanding of the current state of play is:
>
> SLUB: default allocator
> SLAB: deep maintenance, will be removed if SLUB ever covers remaining
> performance regressions
> SLOB: useful for low-end (but high-volume!) embedded
> SLQB: sitting in slab.git#for-next for months, has some ground to cover
>
> SLQB and SLUB have pretty similar target audiences, so I agree we should
> eventually have only one of them. But I strongly expect performance
> results to be mixed, just as they have been comparing SLUB/SLAB.
> Similarly, SLQB still has a lot of room for tuning left compared to SLUB, as
> SLUB did compared to SLAB when it first emerged. It might be a while
> before a clear winner emerges.

Yeah, something like that. I don't think we were really able to decide
anything at KS. IIRC Christoph was in favor of having multiple slab
allocators in the tree, whereas I, for example, would rather have only
one. The SLOB allocator is a bit special here because it targets
embedded systems. However, I also talked to some embedded folks at the
summit and none of them were using SLOB because the gains weren't big
enough, so I don't know how widely it's actually used.

I personally was hoping for SLUB or SLQB to emerge as a clear winner
so we could delete the rest, but that hasn't really happened.

                         Pekka
