Message-Id: <20200518153824.e4e57a651c6ca69fb8776dbc@linux-foundation.org>
Date: Mon, 18 May 2020 15:38:24 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Feng Tang <feng.tang@...el.com>
Cc: Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>, andi.kleen@...el.com,
tim.c.chen@...el.com, dave.hansen@...el.com, ying.huang@...el.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/3] mm: adjust vm_committed_as_batch according to vm
overcommit policy
On Sat, 16 May 2020 14:47:40 +0800 Feng Tang <feng.tang@...el.com> wrote:
> When checking a performance change with the will-it-scale scalability
> mmap test [1], we found very high lock contention on the spinlock of
> the percpu counter 'vm_committed_as':
>
> 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> 45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
>
> This heavy lock contention is not always necessary: 'vm_committed_as'
> only needs to be very precise when the strict OVERCOMMIT_NEVER policy
> is in effect, and that is what requires a rather small batch number
> for the percpu counter.
>
> So keep the batch number unchanged for the strict OVERCOMMIT_NEVER
> policy, and lift it to 64X for the OVERCOMMIT_ALWAYS and
> OVERCOMMIT_GUESS policies. Also add a sysctl handler to recompute it
> when the policy is changed (a sketch of this computation follows the
> quoted changelog below).
>
> A benchmark with the same testcase as [1] shows a 53% improvement on
> an 8C/16T desktop and a 2097% (20X) improvement on a 4S/72C/144T
> server. We tested on the 0day test platforms (server, desktop and
> laptop), and 80%+ of them show improvements with this test; whether
> a platform improves depends on whether the test's mmap size exceeds
> the computed batch number.
>
> If the lift is only 16X, 1/3 of the platforms show improvements,
> though a larger batch should help mmap/munmap usage generally, as
> Michal Hocko mentioned:
> "
> I believe that there are non-synthetic workloads which would benefit
> from a larger batch. E.g. large in-memory databases which do large
> mmaps during startups from multiple threads.
> "
>
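For reference, the policy-dependent batch computation described in the
changelog could look roughly like this. This is an illustrative sketch
only, not the exact v3 patch: it follows the shape of the existing
mm_compute_batch() in mm/mm_init.c, with the 1/256 (0.4%) base and the
64X lift (1/256 * 64 == 1/4) taken from the changelog above.

	/*
	 * Sketch: size the percpu counter batch from the current
	 * overcommit policy.
	 */
	void mm_compute_batch(void)
	{
		u64 memsized_batch;
		s32 nr = num_present_cpus();
		s32 batch = max_t(s32, nr * 2, 32);

		if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
			/* strict accounting: keep the small batch */
			memsized_batch = min_t(u64,
					totalram_pages() / nr / 256,
					INT_MAX);
		else
			/* OVERCOMMIT_ALWAYS/GUESS: lift the batch 64X */
			memsized_batch = min_t(u64,
					totalram_pages() / nr / 4,
					INT_MAX);

		vm_committed_as_batch = max_t(s32, memsized_batch, batch);
	}
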
This needed some adjustments to overcommit_policy_handler() after
linux-next's 32927393dc1c ("sysctl: pass kernel pointers to
->proc_handler"). Relevant parts are below.
--- a/include/linux/mm.h~mm-adjust-vm_committed_as_batch-according-to-vm-overcommit-policy
+++ a/include/linux/mm.h
@@ -205,6 +205,8 @@ int overcommit_ratio_handler(struct ctl_
 		loff_t *);
 int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
+int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
+		loff_t *);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
--- a/mm/util.c~mm-adjust-vm_committed_as_batch-according-to-vm-overcommit-policy
+++ a/mm/util.c
@@ -746,6 +746,18 @@ int overcommit_ratio_handler(struct ctl_
 	return ret;
 }
 
+int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos)
+{
+	int ret;
+
+	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (ret == 0 && write)
+		mm_compute_batch();
+
+	return ret;
+}
+
 int overcommit_kbytes_handler(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos)
 {
_
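For completeness, the new handler fires whenever the policy is
rewritten from userspace. A minimal (hypothetical) demo program, though
a plain `echo 2 > /proc/sys/vm/overcommit_memory` as root does the
same:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* writing the policy invokes overcommit_policy_handler(),
		 * which recomputes vm_committed_as_batch */
		int fd = open("/proc/sys/vm/overcommit_memory", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* 2 == OVERCOMMIT_NEVER: strict accounting, small batch */
		if (write(fd, "2", 1) != 1)
			perror("write");
		close(fd);
		return 0;
	}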