Message-ID: <1455127253.715.36.camel@schen9-desk2.jf.intel.com>
Date: Wed, 10 Feb 2016 10:00:53 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Konstantin Khlebnikov <koct9i@...il.com>
Subject: Re: [RFC PATCH 3/3] mm: increase scalability of global memory
commitment accounting

On Wed, 2016-02-10 at 17:52 +0300, Andrey Ryabinin wrote:
> Currently we use a percpu_counter for accounting committed memory. Changing
> committed memory by more than vm_committed_as_batch pages takes the
> counter's spinlock. The batch size is quite small - from 32 pages up to
> 0.4% of memory per cpu (usually several MBs even on large machines).
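
(For reference, the batching being described looks roughly like the sketch
below; this is a paraphrase of __percpu_counter_add() in lib/percpu_counter.c,
not the exact source.)

void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;

	preempt_disable();
	count = __this_cpu_read(*fbc->counters) + amount;
	if (count >= batch || count <= -batch) {
		/* Local delta crossed the batch: fold it into the
		 * shared count under the spinlock. */
		raw_spin_lock(&fbc->lock);
		fbc->count += count;
		__this_cpu_write(*fbc->counters, 0);
		raw_spin_unlock(&fbc->lock);
	} else {
		/* Common case: stay on the local counter, no lock taken. */
		__this_cpu_write(*fbc->counters, count);
	}
	preempt_enable();
}
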
>
> So map/munmap of several MBs of anonymous memory in multiple processes
> leads to high contention on that spinlock.
>
> Instead of a percpu_counter we could use ordinary per-cpu variables. A dumb
> test case (8 processes running map/munmap of 4MB, with
> vm_committed_as_batch = 2MB on the test setup) showed a 2.5x performance
> improvement.
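
(A minimal sketch of the update side of that approach; the names here are
illustrative, not taken from the actual patch.)

static DEFINE_PER_CPU(long, committed_pages);	/* hypothetical name */

static inline void acct_committed(long pages)
{
	/* Pure per-cpu add: no shared cacheline and no spinlock,
	 * no matter how large the change is. */
	this_cpu_add(committed_pages, pages);
}
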
>
> The downside of this approach is a slowdown of vm_memory_committed().
> However, that doesn't matter much, since it is usually not in a hot path.
> The only exception is __vm_enough_memory() with overcommit set to
> OVERCOMMIT_NEVER. In that case the brk1 test from the will-it-scale
> benchmark shows a 1.1x - 1.3x performance regression.
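
(The read side then has to walk every cpu, which is where that regression
comes from. Again an illustrative sketch, not the actual patch.)

unsigned long committed_sum(void)
{
	long sum = 0;
	int cpu;

	/* O(nr_cpus) per read; the cost grows with the machine size. */
	for_each_possible_cpu(cpu)
		sum += per_cpu(committed_pages, cpu);
	return sum < 0 ? 0 : sum;
}
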
>
> So I think it's a good trade-off: significantly increased scalability for
> the price of some overhead in vm_memory_committed().

It is a trade-off between the counter read speed and the counter update
speed. With this change, reading the counter is slower because we need
to sum over all the cpus each time we need the counter value. That read
overhead grows with the number of cpus, so on large machines it may not
be a good trade-off.
I wonder if you have tried tweaking the batch size of the per-cpu counter
and making it a little larger?
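
For reference, the batch is currently derived along these lines (a
paraphrase of mm_compute_batch() in mm/mm_init.c from that era; check the
tree for the exact form). Bumping the 1/256 (~0.4%) fraction or the 32-page
floor would be the obvious knobs:

void mm_compute_batch(void)
{
	u64 memsized_batch;
	s32 nr = num_present_cpus();
	s32 batch = max_t(s32, nr * 2, 32);

	/* ~0.4% of (total memory / #cpus), capped at 2^31 - 1 pages */
	memsized_batch = min_t(u64, (totalram_pages / nr) / 256, 0x7fffffff);

	vm_committed_as_batch = max_t(s32, memsized_batch, batch);
}
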
Tim