Message-ID: <20140819173615.GA8019@localhost>
Date: Wed, 20 Aug 2014 01:36:15 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Mel Gorman <mgorman@...e.de>
Cc: LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: Re: [mm] f7b5d647946: -3.0% dbench.throughput-MB/sec
On Tue, Aug 19, 2014 at 05:12:58PM +0100, Mel Gorman wrote:
> On Tue, Aug 19, 2014 at 11:43:51PM +0800, Fengguang Wu wrote:
> > On Tue, Aug 19, 2014 at 03:34:28PM +0100, Mel Gorman wrote:
> > > On Tue, Aug 19, 2014 at 12:41:34PM +0800, Fengguang Wu wrote:
> > > > Hi Mel,
> > > >
> > > > We noticed a minor dbench throughput regression on commit
> > > > f7b5d647946aae1647bf5cd26c16b3a793c1ac49 ("mm: page_alloc: abort fair
> > > > zone allocation policy when remotes nodes are encountered").
> > > >
> > > > testcase: ivb44/dbench/100%
> > > >
> > > > bb0b6dffa2ccfbd  f7b5d647946aae1647bf5cd26
> > > > ---------------  -------------------------
> > > >   25692 ±  0%       -3.0%      24913 ±  0%  dbench.throughput-MB/sec
> > > > 6974259 ±  6%      -12.1%    6127616 ±  0%  meminfo.DirectMap2M
> > > >   18.43 ±  0%       -4.6%      17.59 ±  0%  turbostat.RAM_W
> > > >    9302 ±  0%       -3.6%       8965 ±  1%  time.user_time
> > > > 1425791 ±  1%       -2.0%    1396598 ±  0%  time.involuntary_context_switches
> > > >
> > > > Disclaimer:
> > > > Results have been estimated based on internal Intel analysis and are provided
> > > > for informational purposes only. Any difference in system hardware or software
> > > > design or configuration may affect actual performance.
> > > >
> > >
> > > DirectMap2M changing is a major surprise and doesn't make sense for this
> > > machine.
> >
> > The ivb44's hardware configuration is
> >
> > model: Ivytown Ivy Bridge-EP
> > nr_cpu: 48
> > memory: 64G
> >
> > And note that this is an in-memory dbench run, which is why
> > dbench.throughput-MB/sec is so high.
> >
> > > Did the amount of memory in the machine change between two tests?
> >
> > Nope. They are back-to-back test runs, so the environment pretty much
> > remains the same.
>
> Then how did directmap2m change? The sum of the direct maps should
> correspond to the amount of physical memory and this patch has nothing
> to do with any memory initialisation paths that might affect this.

Good question. I'm not sure yet, but it looks like multiple boots of the
same kernel bb0b6dffa2ccfbd report different DirectMap2M values. It only
happens for kernel bb0b6dffa2ccfbd; f7b5d64794 stays stable across all
boots.

bb0b6dffa2ccfbd  f7b5d647946aae1647bf5cd26
---------------  -------------------------
     %stddev         %change       %stddev
         \              |             /
 6974259 ±  6%       -12.1%    6127616 ±  0%  meminfo.DirectMap2M

Looking at the concrete per-boot numbers, DirectMap2M changes from boot
to boot on bb0b6dffa2ccfbd:
"meminfo.DirectMap2M": [
7182336,
7190528,
7178240,
6131712,
7188480
],
The MemTotal does remain stable for this kernel:
"meminfo.MemTotal": [
65869268,
65869268,
65869268,
65869268,
65869268
],
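
For reference, the aggregate numbers in the comparison table can be
reproduced from the per-boot samples above. A minimal sketch -- using
the population stddev and the percent change of the means, which is my
reading of how the report derives them, not taken from the LKP code:

#!/usr/bin/env python3
# Reproduce the aggregate DirectMap2M numbers quoted above from the
# per-boot samples for kernel bb0b6dffa2ccfbd.
from statistics import mean, pstdev

bb0b6d = [7182336, 7190528, 7178240, 6131712, 7188480]  # per-boot values above
f7b5d6_mean = 6127616  # only the mean is quoted; "± 0%" implies it is constant

m = mean(bb0b6d)
print(f"bb0b6d mean    : {m:.0f}")                            # ~6974259
print(f"bb0b6d %stddev : {100 * pstdev(bb0b6d) / m:.0f}%")    # ~6%
print(f"%change        : {100 * (f7b5d6_mean - m) / m:.1f}%") # ~-12.1%
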
Attached are the full stats for the 2 kernels.
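
And as a sanity check on Mel's point that the direct maps should add up
to the installed memory, something along these lines could be run on
each boot. This is only a rough sketch; the read_meminfo() helper is
mine and not part of the LKP harness:

#!/usr/bin/env python3
# Sum the direct-map entries in /proc/meminfo and compare with MemTotal.
# Field names are the standard x86_64 ones; DirectMap1G only shows up
# when 1G pages are used for the direct map.

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of field name -> value in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # all values here are kB
    return info

mi = read_meminfo()
direct_map = sum(mi.get(k, 0) for k in ("DirectMap4k", "DirectMap2M", "DirectMap1G"))
print(f"DirectMap sum : {direct_map} kB")
print(f"MemTotal      : {mi['MemTotal']} kB")

The direct map covers all of physical memory, so the sum is normally a
bit larger than MemTotal (which excludes what the kernel reserves); the
interesting question is whether the sum stays constant across boots
while only the 4k/2M/1G split moves.
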
Thanks,
Fengguang
[Attachment: "f7b5d6-matrix.json" (application/json, 227423 bytes)]
[Attachment: "bb0b6d-matrix.json" (application/json, 229756 bytes)]