Message-ID: <1315448656.31737.252.camel@debian>
Date:	Thu, 08 Sep 2011 10:24:16 +0800
From:	"Alex,Shi" <alex.shi@...el.com>
To:	"Li, Shaohua" <shaohua.li@...el.com>
Cc:	Christoph Lameter <cl@...ux.com>,
	"penberg@...nel.org" <penberg@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Huang, Ying" <ying.huang@...el.com>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH] slub Discard slab page only when node partials >
 minimum setting

On Thu, 2011-09-08 at 09:34 +0800, Li, Shaohua wrote:
> On Thu, 2011-09-08 at 08:43 +0800, Shi, Alex wrote:
> > On Wed, 2011-09-07 at 23:05 +0800, Christoph Lameter wrote:
> > > On Wed, 7 Sep 2011, Shi, Alex wrote:
> > > 
> > > > Oh, it seems deactivate_slab() has already been corrected in Linus'
> > > > tree, but unfreeze_partials() was just copied from the old version of
> > > > deactivate_slab().
> > > 
> > > Ok then the patch is ok.
> > > 
> > > Do you also have performance measurements? I am a bit hesitant to merge
> > > the per cpu partials patchset if there are regressions in the low
> > > concurrency tests, as seems to be indicated by Intel's latest tests.
> > > 
> > 
> > My LKP testing system mostly focuses on server platforms. I tested your per
> > cpu partial set with the hackbench and netperf loopback benchmarks; hackbench
> > improves a lot.
> > 
> > Maybe some I/O testing would exercise SLUB at low concurrency, perhaps a
> > kbuild with few jobs, or light swap-pressure testing. I may try them on your
> > patchset in the coming days.
> > 
> > BTW, some testing results for your PCP SLUB:
> > 
> > for hackbench process testing: 
> > on WSM-EP, inc ~60%, NHM-EP inc ~25%
> > on NHM-EX, inc ~200%, core2-EP, inc ~250%. 
> > on Tigerton-EX, inc 1900%, :) 
> > 
> > for hackbench thread testing: 
> > on WSM-EP, no clear inc, NHM-EP no clear inc
> > on NHM-EX, inc 10%, core2-EP, inc ~20%. 
> > on Tigerton-EX, inc 100%, 
> > 
> > for  netperf loopback testing, no clear performance change. 
> did you add my patch that puts the page at the tail of the partial list in
> the test? Without it, the per-cpu partial list can have a more significant
> impact on reducing lock contention, so the result isn't precise.
> 

No, I didn't add it myself; the penberg tree already includes your patch at
the head of its slub/partial branch. Actually, the PCP code won't take that
path, so there is no need for your patch there. I drafted a patch to remove
some related unused code in __slab_free(), and will send it out later.
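For clarity, the decision the $SUBJECT patch is about in unfreeze_partials()
boils down to the toy model below. This is only an illustration that compiles
in userspace, not kernel code from the patch; every struct and name here is
made up, and only the nr_partial / min_partial comparison mirrors the real
SLUB fields.

/*
 * Toy model, not kernel code: when a per-cpu partial slab is unfrozen
 * back to its node, an empty page is discarded only if the node already
 * holds at least min_partial slabs; otherwise it is kept on the node
 * partial list (at the tail, per Shaohua's lock-contention point).
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_cache { unsigned long min_partial; };
struct toy_node  { unsigned long nr_partial; };

static const char *unfreeze_decision(const struct toy_cache *s,
				     const struct toy_node *n,
				     bool page_empty)
{
	if (page_empty && n->nr_partial >= s->min_partial)
		return "discard the slab page";       /* node has enough spares */
	return "put it on the tail of node->partial"; /* keep it for reuse */
}

int main(void)
{
	struct toy_cache s = { .min_partial = 5 };
	struct toy_node well_stocked = { .nr_partial = 8 };
	struct toy_node nearly_empty = { .nr_partial = 2 };

	printf("empty page, well-stocked node: %s\n",
	       unfreeze_decision(&s, &well_stocked, true));
	printf("empty page, nearly-empty node: %s\n",
	       unfreeze_decision(&s, &nearly_empty, true));
	printf("page still in use:             %s\n",
	       unfreeze_decision(&s, &nearly_empty, false));
	return 0;
}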

But you reminded me that the 3.1-rc2 kernel we were comparing against has a
bug. Compared to the 3.0 kernel instead, on hackbench process testing the PCP
patchset only gains about 5~9% on our 4-socket EX machine, and drops about
2~4% on the 2-socket EP machines.  :)





