Message-ID: <20140819191139.GG10146@suse.de>
Date:	Tue, 19 Aug 2014 20:11:51 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Fengguang Wu <fengguang.wu@...el.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: Re: [mm] f7b5d647946: -3.0% dbench.throughput-MB/sec

On Tue, Aug 19, 2014 at 11:43:51PM +0800, Fengguang Wu wrote:
> On Tue, Aug 19, 2014 at 03:34:28PM +0100, Mel Gorman wrote:
> > On Tue, Aug 19, 2014 at 12:41:34PM +0800, Fengguang Wu wrote:
> > > Hi Mel,
> > > 
> > > We noticed a minor dbench throughput regression on commit
> > > f7b5d647946aae1647bf5cd26c16b3a793c1ac49 ("mm: page_alloc: abort fair
> > > zone allocation policy when remotes nodes are encountered").
> > > 
> > > testcase: ivb44/dbench/100%
> > > 
> > > bb0b6dffa2ccfbd  f7b5d647946aae1647bf5cd26
> > > ---------------  -------------------------
> > >      25692 ± 0%      -3.0%      24913 ± 0%  dbench.throughput-MB/sec
> > >    6974259 ± 6%     -12.1%    6127616 ± 0%  meminfo.DirectMap2M
> > >      18.43 ± 0%      -4.6%      17.59 ± 0%  turbostat.RAM_W
> > >       9302 ± 0%      -3.6%       8965 ± 1%  time.user_time
> > >    1425791 ± 1%      -2.0%    1396598 ± 0%  time.involuntary_context_switches
> > > 
> > > Disclaimer:
> > > Results have been estimated based on internal Intel analysis and are provided
> > > for informational purposes only. Any difference in system hardware or software
> > > design or configuration may affect actual performance.
> > > 
> > 
> > DirectMap2M changing is a major surprise and doesn't make sense for this
> > machine.
> 
> The ivb44's hardware configuration is:
> 
>         model: Ivytown Ivy Bridge-EP
>         nr_cpu: 48
>         memory: 64G
> 
> And note that this is an in-memory dbench run, which is why
> dbench.throughput-MB/sec is so high.
> 

Ok, it's a NUMA machine. In that case I expect that, prior to the patch,
more local memory would have been used on node 0 because the fair zone
allocation policy skipped remote nodes. The patch corrects the zonelist
behaviour, but the downside is more remote accesses for processes running
on node 0. The behaviour is correct, although not necessarily desirable
from a performance point of view. Users should boot with
numa_zonelist_order=node if that is a problem.
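
For illustration, a minimal user-space sketch (not from the original
exchange) of how this could be checked on such a machine; it only reads
the current zonelist ordering and the per-node MemFree counters, assuming
the standard /proc/sys/vm/numa_zonelist_order and
/sys/devices/system/node/node*/meminfo interfaces:

/*
 * Sketch: report the current zonelist ordering and per-node free memory.
 * A free-memory skew towards node 0 would be the signature of the
 * pre-patch behaviour described above.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[128], path[64];
	FILE *f;
	int node;

	/* Current ordering: "default", "zone" or "node". */
	f = fopen("/proc/sys/vm/numa_zonelist_order", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("numa_zonelist_order: %s", buf);
		fclose(f);
	}

	/* Walk the nodes (assumed contiguous) and print MemFree. */
	for (node = 0; ; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/meminfo", node);
		f = fopen(path, "r");
		if (!f)
			break;
		while (fgets(buf, sizeof(buf), f))
			if (strstr(buf, "MemFree"))
				printf("%s", buf);
		fclose(f);
	}
	return 0;
}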

-- 
Mel Gorman
SUSE Labs
