Message-ID: <alpine.DEB.2.00.1108012112420.7373@chino.kir.corp.google.com>
Date:	Mon, 1 Aug 2011 21:15:55 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Christoph Lameter <cl@...ux.com>
cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Andi Kleen <andi@...stfloor.org>, tj@...nel.org,
	Metathronius Galabant <m.galabant@...glemail.com>,
	Matt Mackall <mpm@...enic.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Adrian Drzewiecki <z@...e.net>, linux-kernel@...r.kernel.org
Subject: Re: [slub p3 0/7] SLUB: [RFC] Per cpu partial lists V3

On Mon, 1 Aug 2011, Christoph Lameter wrote:

> Performance:
> 
> 				Before		After	(seconds)
> ./hackbench 100 process 200000
> 				Time: 2299.072	1742.454
> ./hackbench 100 process 20000
> 				Time: 224.654	182.393
> ./hackbench 100 process 20000
> 				Time: 227.126	182.780
> ./hackbench 100 process 20000
> 				Time: 219.608	182.899
> ./hackbench 10 process 20000
> 				Time: 21.769	18.756
> ./hackbench 10 process 20000
> 				Time: 21.657	18.938
> ./hackbench 10 process 20000
> 				Time: 23.193	19.537
> ./hackbench 1 process 20000
> 				Time: 2.337	2.263
> ./hackbench 1 process 20000
> 				Time: 2.223	2.271
> ./hackbench 1 process 20000
> 				Time: 2.269	2.301
> 
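
To put those numbers in perspective: 1742.454 / 2299.072 ~= 0.758, so the
100 process, 200000 loop run sees roughly a 24% reduction in wall time, the
other 100 process runs about 19%, and the 10 process runs roughly 13-16%.
The single process runs are within run-to-run noise, which is what you'd
expect if the win comes from reduced contention.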

This applied cleanly to Linus' tree, so I've moved to testing atop that 
rather than slub/lockless, using the same netperf testing environment as 
the slab vs. slub comparison.  The benchmarking completed without error; 
here are the results:

	threads		before		after	(higher is better)
	 16		75509		75443  (-0.1%)
	 32		118121		117558 (-0.5%)
	 48		149997		149514 (-0.3%)
	 64		185216		186772 (+0.8%)
	 80		221195		222612 (+0.6%)
	 96		239732		241089 (+0.6%)
	112		261967		266643 (+1.8%)
	128		272946		281794 (+3.2%)
	144		279202		289421 (+3.7%)
	160		285745		297216 (+4.0%)

So the patchset certainly looks helpful here, and the win grows with 
thread count; hopefully it improves other benchmarks as well.

I'll review the patches individually, starting with the cleanup patches 
that can hopefully be pushed quickly while we discuss per-cpu partial 
lists further.
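
For anyone following the thread who hasn't read the series yet, here's a
toy sketch of the idea as I understand it (simplified userspace C, not the
actual patch; every name below is made up for illustration): each cpu keeps
a short private chain of partial slabs that it can unqueue from without
touching the shared per-node list_lock, and only falls back to the locked
per-node list when that chain is empty.

#include <pthread.h>

struct slab {
	struct slab *next;		/* chains partial slabs together */
	/* ... freelist, object counts, etc. ... */
};

struct cpu_cache {
	/* toy model: assume only the owning cpu touches this, no lock */
	struct slab *partial;		/* the new per cpu partial chain */
};

struct node_cache {			/* shared, hence the lock */
	pthread_mutex_t list_lock;
	struct slab *partial;
};

static struct slab *get_partial(struct cpu_cache *c, struct node_cache *n)
{
	struct slab *s = c->partial;

	if (s) {			/* fast path: no lock taken */
		c->partial = s->next;
		return s;
	}

	pthread_mutex_lock(&n->list_lock);	/* slow path */
	s = n->partial;
	if (s)
		n->partial = s->next;
	pthread_mutex_unlock(&n->list_lock);
	return s;
}

The real series of course also has to deal with frees feeding pages into
the per cpu chain and with bounding its length (it uses cmpxchg based list
operations rather than relying on cpu locality alone), but that's the
basic shape of where the lock avoidance comes from.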
