Message-Id: <1270195589.2078.116.camel@ymzhang.sh.intel.com>
Date: Fri, 02 Apr 2010 16:06:29 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: alex.shi@...el.com,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Ma, Ling" <ling.ma@...el.com>,
"Chen, Tim C" <tim.c.chen@...el.com>, Tim C <tim.c.chen@...el.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: hackbench regression due to commit 9dfc6e68bfe6e
On Thu, 2010-04-01 at 10:53 -0500, Christoph Lameter wrote:
> On Thu, 1 Apr 2010, Zhang, Yanmin wrote:
>
> > I suspect that the new placement of cpu_slab in kmem_cache causes the new cache
> > misses. But when I move it to the tail of the structure, the kernel always panics
> > when booting. Perhaps there is another potential bug?
>
> Why would that cause an additional cache miss?
>
>
> The node array follows at the end of the structure. If you want to
> move it down, then it needs to be placed before the node field.
Thanks. Moving cpu_slab to the tail doesn't improve it.
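For reference, my understanding of the panic: SLUB allocates each kmem_cache with only
enough room for the node[] array, i.e. offsetof(struct kmem_cache, node) plus nr_node_ids
pointers, so any field declared after node[] lies outside the allocated object. A minimal
user-space sketch of that pattern (illustrative names only, not the real kernel definitions):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_NODES 64			/* stand-in for MAX_NUMNODES */

struct fake_cache {
	void *cpu_slab;			/* safe here, before node[] */
	unsigned long flags;
	void *node[MAX_NODES];		/* only nr_node_ids slots are really allocated */
	/* void *cpu_slab_at_tail;	   a field here would sit past the allocation */
};

int main(void)
{
	int nr_node_ids = 2;		/* e.g. a 2-socket NUMA machine */
	size_t alloc_size = offsetof(struct fake_cache, node) +
			    nr_node_ids * sizeof(void *);
	struct fake_cache *s = calloc(1, alloc_size);

	if (!s)
		return 1;
	printf("sizeof = %zu bytes, actually allocated = %zu bytes\n",
	       sizeof(*s), alloc_size);
	/*
	 * Writing to a member declared after node[] would touch memory
	 * beyond alloc_size bytes; in the kernel that shows up as
	 * corruption or a panic at boot rather than a clean failure.
	 */
	free(s);
	return 0;
}

So cpu_slab (or any other field) has to stay above node[] unless the allocation size is
adjusted as well.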
I used perf to collect statistics. Only the data cache miss numbers show a small difference.
My test command on my 2-socket machine:
#hackbench 100 process 20000
With 2.6.33 it takes about 96 seconds, while 2.6.34-rc2 (or the latest tip tree)
takes about 101 seconds.
perf shows that some SLUB-related functions have higher CPU utilization, while some
other SLUB functions have lower CPU utilization.