Message-ID: <20090114150900.GC25401@wotan.suse.de>
Date:	Wed, 14 Jan 2009 16:09:00 +0100
From:	Nick Piggin <npiggin@...e.de>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Lin Ming <ming.m.lin@...el.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator

On Wed, Jan 14, 2009 at 04:45:15PM +0200, Pekka Enberg wrote:
> Hi Nick,
> 
> On Wed, Jan 14, 2009 at 4:22 PM, Nick Piggin <npiggin@...e.de> wrote:
> > The problem is there was apparently no plan for resolving the SLAB vs SLUB
> > strategy. And then features and things were added to one or the other one.
> > But on the other hand, the SLUB experience was a success in a way because
> > there were a lot of performance regressions found and fixed after it was
> > merged, for example.
> 
> That's not completely true. I can't speak for Christoph, but the
> biggest problem I have is that I have _no way_ of reproducing or
> analyzing the regression. I've tried out various benchmarks I have
> access to but I haven't been able to find anything.
> 
> The hypothesis is that SLUB regresses because of kmalloc()/kfree()
> ping-pong between CPUs and as far as I understood, Christoph thinks we
> can improve SLUB with the per-cpu alloc patches and the freelist
> management rework.
> 
> Don't get me wrong, though. I am happy you are able to work with the
> Intel engineers to fix the long standing issue (I want it fixed too!)
> but I would be happier if the end-result was few simple patches
> against mm/slub.c :-).

Right, but that regression isn't my only problem with SLUB. I think
higher order allocations could be much more damaging for a wider
class of users. It is less common to see higher order allocation failure
reports in places other than lkml, where people tend to have systems
stay up longer and/or do a wider range of things with them.

The idea of removing queues doesn't seem so good to me. Queues are good.
You amortize or avoid all sorts of things with queues. We have them
everywhere in the kernel ;)

 
> On Wed, Jan 14, 2009 at 4:22 PM, Nick Piggin <npiggin@...e.de> wrote:
> > I'd love to be able to justify replacing SLAB and SLUB today, but actually
> > it is simply never going to be trivial to discover performance regressions.
> > So I don't think outright replacement is great either (consider if SLUB
> > had replaced SLAB completely).
> 
> If you ask me, I wish we *had* removed SLAB so relevant people could
> have made a huge stink out of it and the regression would have been
> taken care of quickly ;-).

Well, presumably the stink was made because we've been stuck with SLAB
for 2 years for some reason. But it is not only that one but there were
other regressions too. The point is simply that it would have been much
harder for users to detect if there even is a regression, what with all
the other changes happening.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
