Message-ID: <6E3BC7F7C9A4BF4286DD4C043110F30B5FD97584A3@shsmsx502.ccr.corp.intel.com>
Date: Sun, 2 Oct 2011 20:47:21 +0800
From: "Shi, Alex" <alex.shi@...el.com>
To: Christoph Lameter <cl@...two.org>
CC: Pekka Enberg <penberg@...helsinki.fi>,
"Chen, Tim C" <tim.c.chen@...el.com>,
"Huang, Ying" <ying.huang@...el.com>,
"Huang, Ying" <ying.huang@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH] slub Discard slab page only when node partials >
minimum setting
> > I tested aim9 and netperf; both were said to be related to memory
> > allocation, but I found no performance change with or without PCP. Only
> > hackbench seems sensitive to it. As for aim9, whether with our own
> > configuration or with Mel Gorman's aim9 configuration from his mmtests,
> > there was no clear performance change for PCP slub.
>
> AIM9 tests are usually single threaded so I would not expect any differences.
> Try AIM7? And concurrent netperfs?
I used AIM7 plus the aim9 patch, with 2000 processes set up to run
concurrently, but in fact aim9 cannot put much pressure on the slab allocator.
As to concurrent netperf, I'd like to try it after my vacation, if you can
wait. :)
>
> The PCP patch helps only if there is node lock contention, meaning
> simultaneous allocations/frees from multiple processors on the same cache.
>
> > Checking the kernel function call graph via perf record/perf report,
> > the slab functions are only heavily used in the hackbench benchmark.
>
> Then the question arises whether it's worthwhile merging if it only affects
> this benchmark.
>
From my viewpoint, the patch is still helpful on server machines, while no
clear regression was found on desktop machines. So it is useful.