Message-ID: <20200527014647.GB93879@shbuild999.sh.intel.com>
Date: Wed, 27 May 2020 09:46:47 +0800
From: Feng Tang <feng.tang@...el.com>
To: Qian Cai <cai@....pw>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>,
Kees Cook <keescook@...omium.org>,
Luis Chamberlain <mcgrof@...nel.org>,
Iurii Zaikin <yzaikin@...gle.com>, andi.kleen@...el.com,
tim.c.chen@...el.com, dave.hansen@...el.com, ying.huang@...el.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] make vm_committed_as_batch aware of vm overcommit
policy
Hi Qian,
On Tue, May 26, 2020 at 02:14:59PM -0400, Qian Cai wrote:
> On Thu, May 21, 2020 at 05:27:26PM -0400, Qian Cai wrote:
> > On Fri, May 08, 2020 at 03:25:14PM +0800, Feng Tang wrote:
> > > When checking a performance change for the will-it-scale scalability
> > > mmap test [1], we found very high lock contention on the spinlock of
> > > the percpu counter 'vm_committed_as':
> > >
> > > 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> > > 48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> > > 45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
> > >
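For reference on where the contention comes from: percpu_counter_add_batch()
keeps a per-CPU delta and only takes the shared spinlock once that delta
reaches the batch size, so a small batch makes nearly every mmap/munmap hit
the lock. A simplified sketch of that path (condensed from
lib/percpu_counter.c; details may vary across kernel versions):

	/* Condensed from lib/percpu_counter.c; not the verbatim source */
	void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
				      s32 batch)
	{
		s64 count;

		preempt_disable();
		count = __this_cpu_read(*fbc->counters) + amount;
		if (abs(count) >= batch) {
			unsigned long flags;

			/* slow path: fold the local delta into the shared count */
			raw_spin_lock_irqsave(&fbc->lock, flags);
			fbc->count += count;
			__this_cpu_sub(*fbc->counters, count - amount);
			raw_spin_unlock_irqrestore(&fbc->lock, flags);
		} else {
			/* fast path: stays on this CPU, no lock taken */
			this_cpu_add(*fbc->counters, amount);
		}
		preempt_enable();
	}
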
> > > Actually, this heavy lock contention is not always necessary:
> > > 'vm_committed_as' only needs to be very precise when the strict
> > > OVERCOMMIT_NEVER policy is set, which is what requires a rather small
> > > batch number for the percpu counter.
> > >
> > > So lift the batch number to 16X for OVERCOMMIT_ALWAYS and
> > > OVERCOMMIT_GUESS policies, and add a sysctl handler to adjust it
> > > when the policy is reconfigured.
> > >
> > > A benchmark with the same testcase as [1] shows a 53% improvement on an
> > > 8C/16T desktop and a 2097% (20X) improvement on a 4S/72C/144T server.
> > > Some platforms show no change, because the test's mmap size is bigger
> > > than the computed batch number, though the patch should still help
> > > mmap/munmap generally.
> > >
> > > [1] https://lkml.org/lkml/2020/3/5/57
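
The idea, roughly, is sketched below. This is illustrative only: the exact
scaling factor, helper names and sysctl wiring in the series may differ.
mm_compute_batch() exists in mm/mm_init.c, but the policy-aware scaling and
the overcommit_policy_handler() shown here are assumptions for illustration.

	/*
	 * Sketch: compute a policy-aware batch.  OVERCOMMIT_NEVER needs a
	 * precise sum, so keep the small batch; ALWAYS/GUESS can tolerate
	 * a much larger per-CPU slack (the 16x factor is illustrative).
	 */
	static void mm_compute_batch(void)
	{
		u64 memsized_batch;
		s32 nr = num_present_cpus();
		s32 batch = max_t(s32, nr * 2, 32);

		/* base batch: ~0.4% of (total memory / #cpus), capped at INT_MAX */
		memsized_batch = min_t(u64, (totalram_pages() / nr) / 256, INT_MAX);

		vm_committed_as_batch = max_t(s32, memsized_batch, batch);
		if (sysctl_overcommit_memory != OVERCOMMIT_NEVER)
			vm_committed_as_batch = min_t(s64,
					(s64)vm_committed_as_batch * 16, INT_MAX);
	}

	/* Sketch of a sysctl handler that recomputes the batch on policy change */
	int overcommit_policy_handler(struct ctl_table *table, int write,
				      void *buffer, size_t *lenp, loff_t *ppos)
	{
		int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

		if (!ret && write)
			mm_compute_batch();
		return ret;
	}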
> >
> > Reverting this series fixed a warning under memory pressure.
>
> Andrew, Stephen, can you drop this series?
>
> >
> > [ 3319.257898] LTP: starting oom01
> > [ 3319.284417] ------------[ cut here ]------------
> > [ 3319.284439] memory commitment underflow
Thanks for the catch!
Could you share some info about the platform, like the number of CPUs
and the RAM size, and the mmap test size of your test program?
It would be great if you could point me to the test program.
Thanks,
Feng
> > [ 3319.284456] WARNING: CPU: 13 PID: 130949 at mm/util.c:835 __vm_enough_memory+0x1dc/0x1f0
> > [ 3319.284485] Modules linked in: brd ext4 crc16 mbcache jbd2 loop kvm_hv kvm ip_tables x_tables xfs sd_mod ahci libahci bnx2x libata mdio tg3 libphy firmware_class dm_mirror dm_region_hash dm_log dm_mod
> > [ 3319.284538] CPU: 13 PID: 130949 Comm: oom01 Not tainted 5.7.0-rc6-next-20200521+ #115
> > [ 3319.284551] NIP: c00000000043027c LR: c000000000430278 CTR: c0000000007bd120
> > [ 3319.284572] REGS: c00000064a42f710 TRAP: 0700 Not tainted (5.7.0-rc6-next-20200521+)
> > [ 3319.284602] MSR: 900000000282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 28222422 XER: 20040000
> > [ 3319.284638] CFAR: c000000000119098 IRQMASK: 0
> > GPR00: c000000000430278 c00000064a42f9a0 c000000001765500 000000000000001b
> > GPR04: c00000000179c3b0 000000000000000b 0000000066726564 c00000000429ac28
> > GPR08: 0000000000000001 0000000000000000 0000000000000000 c000001301facc80
> > GPR12: 0000000000002000 c000001fffff3d80 0000000000000000 0000000000000000
> > GPR16: 0000000000000000 0000000000000000 c0000014f358dd60 fffffffffff7dfff
> > GPR20: c0000003725a9128 c0000003725a3928 c000000b0a0c0670 c000000b0a0c0678
> > GPR24: c000000b0a0c0660 c00000064a42fa58 c000000b0a0c0650 c0000003725a9080
> > GPR28: 0000000000000001 0000000000000000 c000000001620178 0000000000000001
> > [ 3319.284825] NIP [c00000000043027c] __vm_enough_memory+0x1dc/0x1f0
> > [ 3319.284853] LR [c000000000430278] __vm_enough_memory+0x1d8/0x1f0
> > [ 3319.284872] Call Trace:
> > [ 3319.284889] [c00000064a42f9a0] [c000000000430278] __vm_enough_memory+0x1d8/0x1f0 (unreliable)
> > [ 3319.284930] [c00000064a42fa30] [c000000000114258] dup_mm+0x2a8/0x700
> > [ 3319.284960] [c00000064a42fae0] [c000000000115c7c] copy_process+0xeac/0x1a00
> > [ 3319.284981] [c00000064a42fbf0] [c000000000116b3c] _do_fork+0xac/0xce0
> > [ 3319.285002] [c00000064a42fd00] [c000000000117928] __do_sys_clone+0x98/0xe0
> > [ 3319.285032] [c00000064a42fdc0] [c000000000039e78] system_call_exception+0x108/0x1d0
> > [ 3319.285064] [c00000064a42fe20] [c00000000000c9f0] system_call_common+0xf0/0x278
> > [ 3319.285093] Instruction dump:
> > [ 3319.285109] 60000000 73a90001 4182000c e8be0002 4bfffeb8 3c62ff51 39200001 3d42fff9
> > [ 3319.285142] 38636690 992a6ad6 4bce8dbd 60000000 <0fe00000> e8be0002 4bfffe90 60000000
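
For reference, the "memory commitment underflow" warning above is emitted by
a sanity check in __vm_enough_memory() (mm/util.c); roughly the following,
subject to the exact tree being tested:

	/*
	 * The central count may legitimately lag the per-CPU deltas by up
	 * to one batch per CPU; anything more negative than that means the
	 * per-CPU deltas and the central count have gone out of sync.
	 */
	VM_WARN_ONCE(percpu_counter_read(&vm_committed_as) <
			-(s64)vm_committed_as_batch * num_online_cpus(),
			"memory commitment underflow");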