Message-ID: <20170725142626.GJ26723@dhcp22.suse.cz>
Date:   Tue, 25 Jul 2017 16:26:26 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        Oleg Nesterov <oleg@...hat.com>,
        Hugh Dickins <hughd@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm, oom: allow oom reaper to race with exit_mmap

On Mon 24-07-17 18:11:46, Michal Hocko wrote:
> On Mon 24-07-17 17:51:42, Kirill A. Shutemov wrote:
> > On Mon, Jul 24, 2017 at 04:15:26PM +0200, Michal Hocko wrote:
> [...]
> > > What kind of scalability implications do you have in mind? There is
> > > basically zero contention on the mmap_sem that late in the exit path,
> > > so this should be pretty much the fast path of down_write. I agree it
> > > is not zero cost, but the cost of the address space freeing should
> > > basically make it noise.
> > 
> > Even in the fast path case, it adds two atomic operations per process. If
> > the cache line is not exclusive to the core by the time of exit(2), it can
> > be noticeable.
> > 
> > ... but I guess it's not very hot scenario.
> > 
> > I guess I'm just too cautious here. :)
> 
> I definitely did not want to hand-wave away your concern. I just think we
> can rule out the slow path, and I didn't think about the fast path overhead.
> 
> > > > Should we do performance/scalability evaluation of the patch before
> > > > getting it applied?
> > > 
> > > What kind of test(s) would you be interested in?
> > 
> > Can we at least check that the number of /bin/true processes we can spawn
> > per second isn't harmed by the patch? ;)
> 
> OK, so measuring a single /bin/true doesn't tell us anything, so I've done
> root@...t1:~# cat a.sh 
> #!/bin/sh
> 
> NR=$1
> for i in $(seq $NR)
> do
>         /bin/true
> done

I wanted to reduce potential shell side effects, so I've come up with a
simple program which forks, saves a timestamp right before the child exits
and another right after waitpid() returns in the parent (see attached), and
then measured that 100k times. Sure, this still includes the waitpid
overhead and the signal delivery, but those should be more or less constant
on an idle system, right?
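
The attachment itself is not reproduced inline, so purely as an illustration,
here is a minimal sketch of what such a fork/waitpid timing harness could look
like. It is not the actual exit_time.c: the shared anonymous mapping,
CLOCK_MONOTONIC, and the output format below are assumptions (compile with
-lm for sqrt()):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(int argc, char **argv)
{
	long nr = argc > 1 ? atol(argv[1]) : 100000;
	/* shared page so the child can publish its timestamp to the parent */
	long long *child_ts = mmap(NULL, sizeof(*child_ts),
				   PROT_READ | PROT_WRITE,
				   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	long long min = -1, max = 0;
	double sum = 0.0, sum_sq = 0.0;
	long i;

	if (child_ts == MAP_FAILED)
		return 1;

	for (i = 0; i < nr; i++) {
		long long delta;
		pid_t pid = fork();

		if (pid == 0) {
			/* timestamp taken just before the child exits */
			*child_ts = now_ns();
			_exit(0);
		}
		waitpid(pid, NULL, 0);
		/* interval: child's pre-exit stamp -> just after waitpid() */
		delta = now_ns() - *child_ts;

		if (min < 0 || delta < min)
			min = delta;
		if (delta > max)
			max = delta;
		sum += delta;
		sum_sq += (double)delta * (double)delta;
	}

	double avg = sum / nr;
	double std = sqrt(sum_sq / nr - avg * avg);

	printf("min: %.2f max: %.2f avg: %.2f std: %.2f nr: %ld\n",
	       (double)min, (double)max, avg, std, nr);
	return 0;
}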

before the patch
min: 306300.00 max: 6731916.00 avg: 437962.07 std: 92898.30 nr: 100000

after
min: 303196.00 max: 5728080.00 avg: 436081.87 std: 96165.98 nr: 100000

The results are well within the noise, as I would expect.
-- 
Michal Hocko
SUSE Labs

View attachment "exit_time.c" of type "text/x-csrc" (986 bytes)
