Message-ID: <1425090622.10337.67.camel@linux.intel.com>
Date: Sat, 28 Feb 2015 10:30:22 +0800
From: Huang Ying <ying.huang@...ux.intel.com>
To: Mel Gorman <mgorman@...e.de>
Cc: LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: Re: [LKP] [mm] 3484b2de949: -46.2% aim7.jobs-per-min
On Sat, 2015-02-28 at 01:46 +0000, Mel Gorman wrote:
> On Fri, Feb 27, 2015 at 03:21:36PM +0800, Huang Ying wrote:
> > FYI, we noticed the below changes on
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> > commit 3484b2de9499df23c4604a513b36f96326ae81ad ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
> >
> > The perf cpu-cycles spent on the spinlock (zone->lock) increased a lot. I suspect there is some cache ping-pong or false sharing.
> >
>
> Are you sure about this result? I ran similar tests here and found that
> there was a major regression introduced near there but it was commit
> 05b843012335 ("mm: memcontrol: use root_mem_cgroup res_counter") that
> caused the problem and was later reverted. In local tests on a 4-node
> machine, commit 3484b2de9499df23c4604a513b36f96326ae81ad was within 1%
> of the previous commit and well within the noise.
I double-checked the result before sending it out.
Did you run the test with the same kernel config and test case/parameters
(aim7/page_test/load 6000)?
Best Regards,
Huang, Ying
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/