Message-ID: <20200603133837.GL7533@dhcp22.suse.cz>
Date: Wed, 3 Jun 2020 15:38:37 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>, Qian Cai <cai@....pw>,
andi.kleen@...el.com, tim.c.chen@...el.com, dave.hansen@...el.com,
ying.huang@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 4/4] mm: adjust vm_committed_as_batch according to vm
overcommit policy
On Fri 29-05-20 09:06:10, Feng Tang wrote:
> When checking a performance change for the will-it-scale scalability mmap
> test [1], we found very high lock contention on the spinlock of the percpu
> counter 'vm_committed_as':
>
> 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> 48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> 45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
>
> This heavy lock contention is not always necessary: 'vm_committed_as'
> only needs to be very precise when the strict OVERCOMMIT_NEVER policy is
> set, and it is that policy which requires a rather small batch number for
> the percpu counter.
>
> So keep 'batch' number unchanged for strict OVERCOMMIT_NEVER policy, and
> lift it to 64X for OVERCOMMIT_ALWAYS and OVERCOMMIT_GUESS policies. Also
> add a sysctl handler to adjust it when the policy is reconfigured.
>
> A benchmark with the same testcase in [1] shows a 53% improvement on an
> 8C/16T desktop and a 2097% (20X) improvement on a 4S/72C/144T server. We
> tested with the test platforms in 0day (server, desktop and laptop), and
> 80%+ of the platforms show improvements with that test. Whether a
> platform shows an improvement depends on whether the test's mmap size is
> bigger than the computed batch number.
>
> If the lift is only 16X, 1/3 of the platforms will show improvements,
> though it should help mmap/munmap usage generally, as Michal Hocko
> mentioned:
>
> : I believe that there are non-synthetic workloads which would benefit from
> : a larger batch. E.g. large in memory databases which do large mmaps
> : during startups from multiple threads.
>
> [1] https://lore.kernel.org/lkml/20200305062138.GI5972@shao2-debian/
>
> Link: http://lkml.kernel.org/r/1589611660-89854-4-git-send-email-feng.tang@intel.com
> Signed-off-by: Feng Tang <feng.tang@...el.com>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Kees Cook <keescook@...omium.org>
> Cc: Andi Kleen <andi.kleen@...el.com>
> Cc: Tim Chen <tim.c.chen@...el.com>
> Cc: Dave Hansen <dave.hansen@...el.com>
> Cc: Huang Ying <ying.huang@...el.com>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> include/linux/mm.h | 2 ++
> include/linux/mman.h | 4 ++++
> kernel/sysctl.c | 2 +-
> mm/mm_init.c | 18 ++++++++++++++----
> mm/util.c | 12 ++++++++++++
> 5 files changed, 33 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 573947c..c2efea6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -206,6 +206,8 @@ int overcommit_ratio_handler(struct ctl_table *, int, void *, size_t *,
> loff_t *);
> int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
> loff_t *);
> +int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
> + loff_t *);
>
> #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
>
> diff --git a/include/linux/mman.h b/include/linux/mman.h
> index 4b08e9c..91c93c1 100644
> --- a/include/linux/mman.h
> +++ b/include/linux/mman.h
> @@ -57,8 +57,12 @@ extern struct percpu_counter vm_committed_as;
>
> #ifdef CONFIG_SMP
> extern s32 vm_committed_as_batch;
> +extern void mm_compute_batch(void);
> #else
> #define vm_committed_as_batch 0
> +static inline void mm_compute_batch(void)
> +{
> +}
> #endif
>
> unsigned long vm_memory_committed(void);
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index db1ce7a..9456c86 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2650,7 +2650,7 @@ static struct ctl_table vm_table[] = {
> .data = &sysctl_overcommit_memory,
> .maxlen = sizeof(sysctl_overcommit_memory),
> .mode = 0644,
> - .proc_handler = proc_dointvec_minmax,
> + .proc_handler = overcommit_policy_handler,
> .extra1 = SYSCTL_ZERO,
> .extra2 = &two,
> },
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 435e5f7..c5a6fb1 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -13,6 +13,7 @@
> #include <linux/memory.h>
> #include <linux/notifier.h>
> #include <linux/sched.h>
> +#include <linux/mman.h>
> #include "internal.h"
>
> #ifdef CONFIG_DEBUG_MEMORY_INIT
> @@ -144,14 +145,23 @@ EXPORT_SYMBOL_GPL(mm_kobj);
> #ifdef CONFIG_SMP
> s32 vm_committed_as_batch = 32;
>
> -static void __meminit mm_compute_batch(void)
> +void mm_compute_batch(void)
> {
> u64 memsized_batch;
> s32 nr = num_present_cpus();
> s32 batch = max_t(s32, nr*2, 32);
> -
> - /* batch size set to 0.4% of (total memory/#cpus), or max int32 */
> - memsized_batch = min_t(u64, (totalram_pages()/nr)/256, 0x7fffffff);
> + unsigned long ram_pages = totalram_pages();
> +
> + /*
> + * For the strict OVERCOMMIT_NEVER policy, set the batch size to
> + * 0.4% of (total memory / #cpus), and lift it to 25% for the other
> + * policies to ease the possible lock contention on the percpu
> + * counter vm_committed_as; the max limit is INT_MAX.
> + */
> + if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> + memsized_batch = min_t(u64, ram_pages/nr/256, INT_MAX);
> + else
> + memsized_batch = min_t(u64, ram_pages/nr/4, INT_MAX);
>
> vm_committed_as_batch = max_t(s32, memsized_batch, batch);
> }
> diff --git a/mm/util.c b/mm/util.c
> index fe63271..580d268 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -746,6 +746,18 @@ int overcommit_ratio_handler(struct ctl_table *table, int write, void *buffer,
> return ret;
> }
>
> +int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
> + size_t *lenp, loff_t *ppos)
> +{
> + int ret;
> +
> + ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> + if (ret == 0 && write)
> + mm_compute_batch();
> +
> + return ret;
> +}
> +
> int overcommit_kbytes_handler(struct ctl_table *table, int write, void *buffer,
> size_t *lenp, loff_t *ppos)
> {
> --
> 2.7.4
>
--
Michal Hocko
SUSE Labs