Message-ID: <4970CDB6.6040705@hp.com>
Date:	Fri, 16 Jan 2009 10:11:02 -0800
From:	Rick Jones <rick.jones2@...com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
CC:	Andrew Morton <akpm@...ux-foundation.org>, netdev@...r.kernel.org,
	sfr@...b.auug.org.au, matthew@....cx, matthew.r.wilcox@...el.com,
	chinang.ma@...el.com, linux-kernel@...r.kernel.org,
	sharad.c.tripathi@...el.com, arjan@...ux.intel.com,
	andi.kleen@...el.com, suresh.b.siddha@...el.com,
	harita.chilukuri@...el.com, douglas.w.styner@...el.com,
	peter.xihong.wang@...el.com, hubert.nueckel@...el.com,
	chris.mason@...cle.com, srostedt@...hat.com,
	linux-scsi@...r.kernel.org, andrew.vasquez@...gic.com,
	anirban.chakraborty@...gic.com
Subject: Re: Mainline kernel OLTP performance update

Nick Piggin wrote:
> OK, I have these numbers to show I'm not completely off my rocker to suggest
> we merge SLQB :) Given these results, how about I ask to merge SLQB as the default
> in linux-next; then, if nothing catastrophic happens, merge it upstream in the
> next merge window; and then, a couple of releases after that, once there has
> been some time to test and tweak SLQB, we bite the bullet and emerge with just
> one main slab allocator (plus SLOB).
> 
> 
> System is a 2-socket, 4-core AMD.

Not exactly a large system :)  Barely NUMA even with just two sockets.

> All debug and stats options were turned off for
> all the allocators; default parameters (i.e. SLUB using higher-order pages,
> and the others tending to use order-0). SLQB is the version I recently
> posted, with some of the prefetching removed per Pekka's review
> (probably a good idea to only add things like that if/when they prove
> to be an improvement).
> 
> ...
 >
> Netperf UDP unidirectional send test (10 runs, higher is better):
> 
> Server and client bound to same CPU
> SLAB AVG=60.111 STD=1.59382
> SLQB AVG=60.167 STD=0.685347
> SLUB AVG=58.277 STD=0.788328
> 
> Server and client bound to same socket, different CPUs
> SLAB AVG=85.938 STD=0.875794
> SLQB AVG=93.662 STD=2.07434
> SLUB AVG=81.983 STD=0.864362
> 
> Server and client bound to different sockets
> SLAB AVG=78.801 STD=1.44118
> SLQB AVG=78.269 STD=1.10457
> SLUB AVG=71.334 STD=1.16809
 > ...
> I haven't done any non-local network tests. Networking is one of the
> subsystems most heavily dependent on slab performance, so if anybody
> cares to run their favourite tests, that would be really helpful.
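
For anyone who wants to take Nick up on that, here is a rough sketch (in
Python, just to script the runs) of how the loopback UDP test above might be
reproduced. The netperf flags, CPU numbers, run length and the naive output
parsing are all assumptions on my part; Nick's exact invocation isn't shown
in the mail:

#!/usr/bin/env python3
# Hypothetical reproduction of the loopback UDP_STREAM runs above: 10 runs,
# both ends pinned to chosen CPUs via netperf's -T option, then AVG/STD as
# in the tables.  Flags, CPU numbers and parsing are assumptions, not Nick's
# actual setup.
import statistics
import subprocess

RUNS = 10
LOCAL_CPU, REMOTE_CPU = 0, 1   # e.g. same socket, different CPUs

def one_run():
    out = subprocess.run(
        ["netperf", "-t", "UDP_STREAM", "-H", "localhost",
         "-T", f"{LOCAL_CPU},{REMOTE_CPU}", "-l", "30"],
        capture_output=True, text=True, check=True).stdout
    # Classic UDP_STREAM output ends with two data rows (send side, then
    # receive side) whose last column is throughput; keep only the numbers.
    vals = []
    for line in out.splitlines():
        fields = line.split()
        if fields:
            try:
                vals.append(float(fields[-1]))
            except ValueError:
                pass
    return vals[-2], vals[-1]   # (send throughput, receive throughput)

send, recv = zip(*(one_run() for _ in range(RUNS)))
print("send AVG=%.3f STD=%.6g" % (statistics.mean(send), statistics.stdev(send)))
print("recv AVG=%.3f STD=%.6g" % (statistics.mean(recv), statistics.stdev(recv)))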

I'm guessing, but are these Mbit/s figures? And would that be the sending
throughput or the receiving throughput?

I love to see netperf used, but why UDP and loopback?  Also, how about the 
service demands?
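
For reference, a service-demand run would just add CPU measurement to the
sketch above: netperf's global -c (local CPU) and -C (remote CPU) options
make the output include utilization and service-demand columns. This is an
illustrative assumption about the invocation, not Nick's actual command:

# Hypothetical variant of the earlier sketch: enable CPU measurement on both
# ends so netperf's table includes utilization and service-demand columns.
import subprocess

out = subprocess.run(
    ["netperf", "-t", "UDP_STREAM", "-H", "localhost",
     "-T", "0,1", "-l", "30", "-c", "-C"],
    capture_output=True, text=True, check=True).stdout
print(out)   # look for the service-demand columns in the printed table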

rick jones
