Date:	Fri, 04 May 2007 15:43:29 -0700
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Christoph Lameter <clameter@....com>
Cc:	"Chen, Tim C" <tim.c.chen@...el.com>,
	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	"Zhang, Yanmin" <yanmin.zhang@...el.com>,
	"Wang, Peter Xihong" <peter.xihong.wang@...el.com>,
	Arjan van de Ven <arjan@...radead.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: RE: Regression with SLUB on Netperf and Volanomark

On Fri, 2007-05-04 at 11:27 -0700, Christoph Lameter wrote:

> 
> Not sure where to go here. Increasing the per cpu slab size may hold off 
> the issue up to a certain cpu cache size. For that we would need to 
> identify which slabs create the performance issue.
> 
> One easy way to check that this is indeed the case: Enable fake NUMA. You 
> will then have separate queues for each processor since they are on 
> different "nodes". Create two fake nodes. Run one thread in each node and 
> see if this fixes it.
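
On the per-cpu slab size point: if growing it is worth a try, I assume
that can be done at boot with SLUB's parameters, something like the
untested guess below (the values are not tuned, just placeholders):

# appended to the kernel command line; a higher order / more objects per
# slab means more allocations are served from the per-cpu slab before the
# partial lists get involved
slub_min_objects=32 slub_max_order=3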

I tried fake NUMA (booting with numa=fake=2) and used

numactl --physcpubind=1 --membind=0 ./netserver
numactl --physcpubind=2 --membind=1 ./netperf -t TCP_STREAM -l 60 \
	-H 127.0.0.1 -i 5,5 -I 99,5 -- -s 57344 -S 57344 -m 4096

to run the tests.  The results are about the same as in the non-NUMA case,
with SLAB about 5% better than SLUB.

So the difference is probably due to something other than partial slab
handling.  The kernel config file is attached.
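
To narrow down which slab caches are the hot ones on this workload,
something like the following might do it (rough sketch; assumes slabtop
from procps is installed and that /proc/slabinfo is available on the
kernel under test, which I have only checked for the SLAB config):

# watch the most active caches while netperf runs
slabtop --sort=a --delay=1

# or snapshot slabinfo around a run and compare the object counts
cat /proc/slabinfo > slabinfo.before
numactl --physcpubind=2 --membind=1 ./netperf -t TCP_STREAM -l 60 -H 127.0.0.1 -- -s 57344 -S 57344 -m 4096
cat /proc/slabinfo > slabinfo.after
diff slabinfo.before slabinfo.after

That should show which caches dominate the allocation traffic here.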

Tim

View attachment "config-numa-slub" of type "text/plain" (25422 bytes)
