Message-ID: <1315445674.29510.74.camel@sli10-conroe>
Date: Thu, 08 Sep 2011 09:34:34 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: "Shi, Alex" <alex.shi@...el.com>
Cc: Christoph Lameter <cl@...ux.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Huang, Ying" <ying.huang@...el.com>,
"Chen, Tim C" <tim.c.chen@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH] slub Discard slab page only when node partials >
minimum setting
On Thu, 2011-09-08 at 08:43 +0800, Shi, Alex wrote:
> On Wed, 2011-09-07 at 23:05 +0800, Christoph Lameter wrote:
> > On Wed, 7 Sep 2011, Shi, Alex wrote:
> >
> > > Oh, it seems deactivate_slab() was already corrected in Linus'
> > > tree, but unfreeze_partials() was just copied from the old version
> > > of deactivate_slab().
> >
> > Ok then the patch is ok.
> >
> > Do you also have performance measurements? I am a bit hesitant to merge
> > the per-cpu partials patchset if there are regressions in the low
> > concurrency tests, as seems to be indicated by Intel's latest tests.
> >
>
> My LKP testing system mostly focuses on server platforms. I tested your
> per-cpu partial set on the hackbench and netperf loopback benchmarks;
> hackbench improves a lot.
>
> Maybe some IO testing would be low concurrency for SLUB, maybe a kbuild
> with a few jobs? Or low swap-pressure testing. I may try them on your
> patchset in the coming days.
>
> BTW, some testing results for your PCP SLUB:
>
> for hackbench process testing:
> on WSM-EP, inc ~60%, NHM-EP inc ~25%
> on NHM-EX, inc ~200%, core2-EP, inc ~250%.
> on Tigerton-EX, inc 1900%, :)
>
> for hackbench thread testing:
> on WSM-EP, no clear inc, NHM-EP no clear inc
> on NHM-EX, inc 10%, core2-EP, inc ~20%.
> on Tigerton-EX, inc 100%,
>
> for netperf loopback testing, no clear performance change.
Did you add my patch that puts pages on the tail of the partial list in
the test? Without it the per-cpu partial list can show a more significant
impact on reducing lock contention, so the result isn't precise.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/