Message-ID: <alpine.DEB.2.00.1010191337370.20631@chino.kir.corp.google.com>
Date:	Tue, 19 Oct 2010 13:39:55 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Christoph Lameter <cl@...ux.com>
cc:	Pekka Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [UnifiedV4 00/16] The Unified slab allocator (V4)

On Tue, 5 Oct 2010, Christoph Lameter wrote:

> V3->V4:
> - Lots of debugging
> - Performance optimizations (more would be good)...
> - Drop per slab locking in favor of per node locking for
>   partial lists (queuing implies freeing large amounts of objects
>   to per node lists of slab).
> - Implement object expiration via reclaim VM logic.
> 

I applied this set on top of Pekka's for-next tree, reverted back to
commit 5d1f57e4, since the set doesn't apply cleanly later than that.

Overall, the results are _much_ better than with the vanilla slub
allocator, with which I frequently saw ~20% regressions on the TCP_RR
netperf benchmark on a couple of my machines with larger cpu counts.
However, there is still a significant performance degradation compared
to slab.

When running netperf-2.4.5 with this patchset between two machines
(client and server), each with four 2.2GHz quad-core AMD processors and
64GB of memory, here are the results:

	threads		SLAB (trans/s)	SLUB (trans/s)	diff
	16		207038		184389		-10.9%
	32		266105		234386		-11.9%
	48		287989		252733		-12.2%
	64		307572		277221		 -9.9%
	80		309802		284199		 -8.3%
	96		302959		291743		 -3.7%
	112		307381		297459		 -3.2%
	128		314582		299340		 -4.8%
	144		331945		299648		 -9.7%
	160		321882		314192		 -2.4%
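(The diff column is just the relative change of SLUB vs. the SLAB
baseline, e.g. for 16 threads:)

def diff(slab: float, slub: float) -> float:
    # Relative change of SLUB vs. the SLAB baseline, in percent.
    return (slub - slab) / slab * 100.0

print(f"{diff(207038, 184389):.1f}%")   # -> -10.9%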