Message-ID: <alpine.DEB.2.00.1112131734070.8593@chino.kir.corp.google.com>
Date:	Tue, 13 Dec 2011 17:38:43 -0800 (PST)
From:	David Rientjes <rientjes@...gle.com>
To:	"Shi, Alex" <alex.shi@...el.com>
cc:	Christoph Lameter <cl@...ux.com>,
	"penberg@...nel.org" <penberg@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Eric Dumazet <eric.dumazet@...il.com>
Subject: RE: [PATCH 1/3] slub: set a criteria for slub node partial adding

On Fri, 9 Dec 2011, Shi, Alex wrote:

> Of course any testing may have some variation in its results. But that 
> depends on the benchmark, and there are many techniques for tuning your 
> testing to keep its standard deviation acceptable, such as putting the 
> system into a clean state, shutting down unnecessary services, using 
> separate working disks for your testing, and so on. As for this data, it 
> is from my SNB-EP machine (the following numbers do not represent Intel, 
> they are just my personal data).

I always run benchmarks on freshly booted machines and disable all but 
the most basic and required userspace in my testing environment, so I can 
assure you that my comparison of slab and slub on netperf TCP_RR isn't 
skewed by any noise from userspace.

> Four runs of hackbench with this patch give 5.59, 5.475, 5.47833, and 
> 5.504.

I haven't been running hackbench benchmarks, sorry.  I was always under 
the assumption that slub is still slightly better than slab with 
hackbench, since that was used as justification for it becoming the 
default allocator, and also because Christoph recently had patches merged 
which improved hackbench performance on slub.  I've been speaking only 
about my history with netperf TCP_RR when using slub.

> > Not sure what you're asking me to test; you would like this:
> > 
> > 	{
> > 	        n->nr_partial++;
> > 	-       if (tail == DEACTIVATE_TO_TAIL)
> > 	-               list_add_tail(&page->lru, &n->partial);
> > 	-       else
> > 	-               list_add(&page->lru, &n->partial);
> > 	+       list_add_tail(&page->lru, &n->partial);
> > 	}
> > 
> > with the statistics patch above?  I typically run with CONFIG_SLUB_STATS
> > disabled since it impacts performance so heavily, and I'm not sure what
> > information you're looking for with regard to those stats.
> 
> No, when you collect data, please disable CONFIG_SLUB_STATS in the 
> kernel config.  The _to_head statistics collection patch just shows that 
> the statistics I collected do not include the add_partial() call in 
> early_kmem_cache_node_alloc(), while the other add_partial() call sites 
> were covered. Of course, a kernel with statistics enabled cannot be used 
> to measure performance. 
> 

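Understood.  To make sure we're talking about the same overhead: my 
approximate reading of the stat() helper in mm/slub.c (sketched from 
memory, so treat the exact form as approximate) is that every tracked 
event bumps a per-cpu counter when CONFIG_SLUB_STATS is enabled, which is 
exactly the kind of fastpath cost that makes a stats kernel useless for 
timing:

	/*
	 * Rough sketch (from memory, so approximate) of the per-cpu
	 * statistics helper in mm/slub.c: each tracked event increments
	 * a per-cpu counter, adding work to the alloc/free fastpaths.
	 */
	static inline void stat(const struct kmem_cache *s, enum stat_item si)
	{
	#ifdef CONFIG_SLUB_STATS
		__this_cpu_inc(s->cpu_slab->stat[si]);
	#endif
	}

So the timed runs will all be with CONFIG_SLUB_STATS disabled.
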
Ok, I'll benchmark netperf TCP_RR on Linus' latest -git both with and 
without the above change.  It was confusing because you had three diffs 
in your email, and I wasn't sure which one, or which combination of them, 
you wanted me to try :)
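
For the record, my understanding of the variant you want tested (the 
surrounding context is approximated from my reading of mm/slub.c, so 
consider it a sketch) is that add_partial() simply always queues the page 
at the tail of the node's partial list:

	/*
	 * Sketch of add_partial() with the change above applied (context
	 * approximated from mm/slub.c): every page is queued at the tail
	 * of the node's partial list, ignoring the DEACTIVATE_TO_TAIL hint.
	 */
	static inline void add_partial(struct kmem_cache_node *n,
					struct page *page, int tail)
	{
		n->nr_partial++;
		list_add_tail(&page->lru, &n->partial);
	}

With that change the tail argument presumably becomes unused and could be 
dropped from the callers as well, but I'll test the minimal diff as-is.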
