Message-Id: <200902032136.26022.nickpiggin@yahoo.com.au>
Date:	Tue, 3 Feb 2009 21:36:24 +1100
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Mel Gorman <mel@....ul.ie>
Cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Nick Piggin <npiggin@...e.de>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Lin Ming <ming.m.lin@...el.com>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator (try 2)

On Tuesday 03 February 2009 21:12:06 Mel Gorman wrote:
> On Mon, Jan 26, 2009 at 10:48:26AM +0200, Pekka Enberg wrote:
> > Hi Nick,
> >
> > On Fri, 2009-01-23 at 16:46 +0100, Nick Piggin wrote:
> > > Since last time, fixed bugs pointed out by Hugh and Andi, cleaned up
> > > the code suggested by Ingo (haven't yet incorporated Ingo's last
> > > patch).
> > >
> > > Should have fixed the crash reported by Yanmin (I was able to reproduce
> > > it on an ia64 system and fix it).
> > >
> > > Significantly reduced static footprint of init arrays, thanks to Andi's
> > > suggestion.
> > >
> > > Please consider for trial merge for linux-next.
> >
> > I merged the one you resent privately as this one didn't apply at all.
> > The code is in the topic/slqb/core branch of slab.git and should appear in
> > linux-next tomorrow.
> >
> > Testing and especially performance testing is welcome. If any of the HPC
> > people are reading this, please do give SLQB a good beating as Nick's
> > plan is to replace both SLAB and SLUB with it in the long run. As
> > Christoph has expressed concerns over latency issues of SLQB, I suppose
> > it would be interesting to hear if it makes any difference to the
> > real-time folks.
>
> The HPC folks care about a few different workloads but speccpu is one that
> shows up. I was in the position to run tests because I had put together
> the test harness for a paper I spent the last month writing. This mail
> shows a comparison between slab, slub and slqb for speccpu2006 running a
> single thread, and for sysbench with client counts ranging from 1 to
> 4*num_online_cpus() (16 in both cases). Additional tests were not run
> because just these two take one day per kernel to complete. Results are
> ratios relative to the SLAB figures, from one x86-64 and one ppc64 machine.
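For anyone wanting to reproduce that kind of client sweep, a trivial driver
along the following lines should do; this is only a rough sketch, and the
sysbench command below is a placeholder for whatever test type and options
the real harness uses:

/*
 * Rough sketch: run a benchmark at client counts 1..4*num_online_cpus(),
 * as in the sweep described above.  The sysbench invocation is only a
 * placeholder; substitute the test and options you actually use.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	long clients;
	char cmd[256];

	for (clients = 1; clients <= 4 * ncpus; clients++) {
		snprintf(cmd, sizeof(cmd),
			 "sysbench --num-threads=%ld --test=oltp run", clients);
		printf("== %ld clients ==\n", clients);
		if (system(cmd) != 0)
			fprintf(stderr, "run with %ld clients failed\n", clients);
	}
	return 0;
}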

Hi Mel,

This is very nice, thanks for testing. SLQB and SLUB perform quite similarly
in a lot of cases, which could indeed be explained by cacheline placement
(both of them can allocate down to much smaller object sizes than SLAB, and
both put their metadata directly in free object memory rather than in
external locations).
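
As a toy illustration of the "metadata in free object memory" point (this is
not the actual SLUB or SLQB code, and the names below are made up), the first
word of a free object can simply hold the link to the next free object, so
there is no external bookkeeping array and the first touch on allocation is
the object's own cacheline:

#include <stdio.h>
#include <stdlib.h>

struct toy_cache {
	void *freelist;		/* first free object, NULL when empty */
	size_t objsize;
};

static void toy_free(struct toy_cache *c, void *obj)
{
	/* reuse the free object's first word as the "next free" link */
	*(void **)obj = c->freelist;
	c->freelist = obj;
}

static void *toy_alloc(struct toy_cache *c)
{
	void *obj = c->freelist;

	if (obj)
		c->freelist = *(void **)obj;	/* follow the in-object link */
	return obj;
}

int main(void)
{
	struct toy_cache c = { .freelist = NULL, .objsize = 64 };
	char *slab = malloc(4096);
	size_t i;

	/* carve a page-sized "slab" into objects and thread them together */
	for (i = 0; i + c.objsize <= 4096; i += c.objsize)
		toy_free(&c, slab + i);

	printf("first allocation: %p\n", toy_alloc(&c));
	free(slab);
	return 0;
}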

But it will be interesting to look at some of the tests where SLQB has
larger regressions; that might give me something to go on, if I can lay
my hands on speccpu2006...

I'd be interested to see how SLUB performs when booted with slub_min_objects=1
(which should give it page orders similar to SLAB and SLQB).
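
If anyone wants to sanity-check what that does to the slab page sizes, a
quick look at /proc/slabinfo before and after should be enough. A rough
reader like the one below works, assuming the allocator exposes
/proc/slabinfo at all, that the file is readable (it may need root on some
configurations), and a slabinfo 2.x column layout (name, active_objs,
num_objs, objsize, objperslab, pagesperslab):

/*
 * Rough sketch: print object size and pages-per-slab for each cache from
 * /proc/slabinfo, e.g. to compare orders before and after booting with
 * slub_min_objects=1.  Adjust the sscanf format if the column layout on
 * your kernel differs.
 */
#include <stdio.h>

int main(void)
{
	char line[512], name[64];
	unsigned int objsize, objperslab, pagesperslab;
	FILE *f = fopen("/proc/slabinfo", "r");

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}

	/* skip the version line and the column header line */
	if (!fgets(line, sizeof(line), f) || !fgets(line, sizeof(line), f)) {
		fclose(f);
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%63s %*u %*u %u %u %u",
			   name, &objsize, &objperslab, &pagesperslab) == 4)
			printf("%-24s objsize=%-6u pagesperslab=%u\n",
			       name, objsize, pagesperslab);
	}
	fclose(f);
	return 0;
}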


