Message-Id: <20160210132818.589451dbb5eafae3fdb4a7ec@linux-foundation.org>
Date:	Wed, 10 Feb 2016 13:28:18 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
Cc:	Andrey Ryabinin <aryabinin@...tuozzo.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Konstantin Khlebnikov <koct9i@...il.com>
Subject: Re: [RFC PATCH 3/3] mm: increase scalability of global memory
 commitment accounting

On Wed, 10 Feb 2016 10:00:53 -0800 Tim Chen <tim.c.chen@...ux.intel.com> wrote:

> On Wed, 2016-02-10 at 17:52 +0300, Andrey Ryabinin wrote:
> > Currently we use a percpu_counter to account committed memory. Any
> > change of committed memory by more than vm_committed_as_batch pages
> > grabs the counter's spinlock. The batch size is quite small - from
> > 32 pages up to 0.4% of memory per CPU (usually several MBs even on
> > large machines).
> > 
> > So map/munmap of several MBs of anonymous memory in multiple
> > processes leads to high contention on that spinlock.
> > 
> > Instead of a percpu_counter we could use ordinary per-cpu variables.
> > A dumb test case (8 processes running map/munmap of 4MB, with
> > vm_committed_as_batch = 2MB on the test setup) showed a 2.5x
> > performance improvement.
> > 
> > The downside of this approach is that it slows down
> > vm_memory_committed(). However, that doesn't matter much, since it
> > is usually not on a hot path. The only exception is
> > __vm_enough_memory() with overcommit set to OVERCOMMIT_NEVER; in
> > that case the brk1 test from the will-it-scale benchmark shows a
> > 1.1x - 1.3x performance regression.
> > 
> > So I think it's a good tradeoff: significantly increased scalability
> > for the price of some overhead in vm_memory_committed().
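
For context, the batching behaviour under discussion looks roughly
like this - a simplified sketch of the percpu_counter update path,
not the exact lib/percpu_counter.c source:

	/*
	 * Sketch only: each CPU accumulates deltas locally, and only
	 * when the local value crosses +/-batch do we take the shared
	 * spinlock - the contention point described above.
	 */
	void percpu_counter_add_sketch(struct percpu_counter *fbc,
				       s64 amount, s32 batch)
	{
		s64 count;

		preempt_disable();
		count = __this_cpu_read(*fbc->counters) + amount;
		if (count >= batch || count <= -batch) {
			raw_spin_lock(&fbc->lock);	/* shared, contended */
			fbc->count += count;
			__this_cpu_write(*fbc->counters, 0);
			raw_spin_unlock(&fbc->lock);
		} else {
			__this_cpu_write(*fbc->counters, count);	/* lock-free */
		}
		preempt_enable();
	}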
> 
> It is a trade-off between counter read speed and counter update
> speed.  With this change, reading the counter is slower because we
> need to sum over all the CPUs each time we need the counter's value.
> That read overhead grows with the number of CPUs, so on machines with
> many CPUs it may not be a good trade-off.
> 
> I wonder if you have tried tweaking the batch size of the per-cpu
> counter and making it a little larger?
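
The read-side cost being described: with plain per-cpu variables,
every reader has to walk all CPUs, whereas percpu_counter_read() is a
single load of the shared count. A hypothetical sketch (the variable
name committed_as_pcpu is illustrative, not necessarily what the
patch uses):

	static DEFINE_PER_CPU(long, committed_as_pcpu);

	unsigned long vm_memory_committed_sketch(void)
	{
		long sum = 0;
		int cpu;

		/* O(nr_cpus): the cost of every read grows with the
		 * number of possible CPUs. */
		for_each_possible_cpu(cpu)
			sum += per_cpu(committed_as_pcpu, cpu);

		/* per-cpu deltas can transiently drive the sum negative */
		return sum > 0 ? sum : 0;
	}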

If a process is unmapping 4MB then it's pretty crazy for us to be
hitting the percpu_counter 32 separate times for that single operation
(1024 pages at the minimum 32-page batch).

Is there some way in which we can batch up the modifications within the
caller and update the counter less frequently?  Perhaps even in a
single hit?
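
Something along these lines, say - a hypothetical sketch of
caller-side batching (unmap_region_sketch() is illustrative, not the
real mm/mmap.c code, though vma_pages() and vm_unacct_memory() are
the existing helpers):

	static void unmap_region_sketch(struct vm_area_struct *vma)
	{
		long nr_accounted = 0;

		/* Accumulate the accounted page count across the whole
		 * unmap... */
		for (; vma; vma = vma->vm_next) {
			if (vma->vm_flags & VM_ACCOUNT)
				nr_accounted += vma_pages(vma);
			/* ... per-VMA unmapping work ... */
		}

		/* ...and hit the counter once, rather than once per
		 * batch-sized chunk. */
		vm_unacct_memory(nr_accounted);
	}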
