Message-ID: <20200515090821.GO29153@dhcp22.suse.cz>
Date: Fri, 15 May 2020 11:08:21 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>,
Luis Chamberlain <mcgrof@...nel.org>,
Iurii Zaikin <yzaikin@...gle.com>,
"Kleen, Andi" <andi.kleen@...el.com>,
"Chen, Tim C" <tim.c.chen@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"Huang, Ying" <ying.huang@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] mm: adjust vm_committed_as_batch according to vm
overcommit policy
On Fri 15-05-20 16:02:10, Feng Tang wrote:
> Hi Michal,
>
> Thanks for the thorough reviews for these 3 patches!
>
> On Fri, May 15, 2020 at 03:41:25PM +0800, Michal Hocko wrote:
> > On Fri 08-05-20 15:25:17, Feng Tang wrote:
> > > When checking a performance change for will-it-scale scalability
> > > mmap test [1], we found very high lock contention for spinlock of
> > > percpu counter 'vm_committed_as':
> > >
> > > 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> > > 48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> > > 45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
> > >
> > > This heavy lock contention is not always necessary: 'vm_committed_as'
> > > only needs to be very precise when the strict OVERCOMMIT_NEVER
> > > policy is set, which requires a rather small batch number for the
> > > percpu counter.
> > >
> > > So lift the batch number to 16X for OVERCOMMIT_ALWAYS and
> > > OVERCOMMIT_GUESS policies, and add a sysctl handler to adjust it
> > > when the policy is reconfigured.
> >
> > Increasing the batch size for weaker overcommit modes makes sense. But
> > your patch is also tuning OVERCOMMIT_NEVER without any explanation why
> > that is still "small enough to be precise".
>
> Actually, it keeps the batch algorithm for "OVERCOMMIT_NEVER", but
> changes the other 2 policies, which I should have made clear in the
> commit log.
Yeah, I have misread that part. Sorry about that.
[...]
> > > +void mm_compute_batch(void)
> > > {
> > > u64 memsized_batch;
> > > s32 nr = num_present_cpus();
> > > s32 batch = max_t(s32, nr*2, 32);
> > > -
> > > - /* batch size set to 0.4% of (total memory/#cpus), or max int32 */
> > > - memsized_batch = min_t(u64, (totalram_pages()/nr)/256, 0x7fffffff);
> > > + unsigned long ram_pages = totalram_pages();
> > > +
> > > +	/*
> > > +	 * For the OVERCOMMIT_NEVER policy, set the batch size to 0.4%
> > > +	 * of (total memory/#cpus), and lift it to 6.25% for the other
> > > +	 * policies to ease possible lock contention on the percpu counter
> > > +	 * vm_committed_as, while the max limit is INT_MAX
> > > +	 */
> > > + if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> > > + memsized_batch = min_t(u64, ram_pages/nr/256, INT_MAX);
> > > + else
> > > + memsized_batch = min_t(u64, ram_pages/nr/16, INT_MAX);
>
> Also, as you mentioned, there are real-world workloads with big mmap
> sizes and multi-threading; can we lift it even further?
> 	memsized_batch = min_t(u64, ram_pages/nr/4, INT_MAX)
Try to measure those and see what numbers look like.
--
Michal Hocko
SUSE Labs