Date:   Mon, 23 Mar 2020 08:52:08 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Rafael Aquini <aquini@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-kselftest@...r.kernel.org, shuah@...nel.org
Subject: Re: [PATCH] tools/testing/selftests/vm/mlock2-tests: fix mlock2
 false-negative errors

On Sun 22-03-20 09:36:49, Shakeel Butt wrote:
> On Sat, Mar 21, 2020 at 9:31 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > On Sat, 21 Mar 2020 22:03:26 -0400 Rafael Aquini <aquini@...hat.com> wrote:
> >
> > > > > + * In order to sort out that race, and get the after fault checks consistent,
> > > > > + * the "quick and dirty" trick below is required in order to force a call to
> > > > > + * lru_add_drain_all() to get the recently MLOCK_ONFAULT pages moved to
> > > > > + * the unevictable LRU, as expected by the checks in this selftest.
> > > > > + */
> > > > > +static void force_lru_add_drain_all(void)
> > > > > +{
> > > > > + sched_yield();
> > > > > + system("echo 1 > /proc/sys/vm/compact_memory");
> > > > > +}
> > > >
> > > > What is the sched_yield() for?
> > > >
> > >
> > > Mostly it's there to provide a sleeping gap after the fault, without
> > > actually adding an arbitrary delay with usleep().
> > >
> > > It's not a hard requirement, but in some of the tests I performed
> > > (without that sleeping gap) I would still see around a 1% chance
> > > of hitting the false negative. After adding it I could not hit
> > > the issue anymore.
> >
> > It's concerning that such deep machinery as pagevec draining is visible
> > to userspace.
> >
> 
> We already have other examples, like the memcg stats, where optimizations
> such as batched per-cpu stats collection expose differences to userspace.
> I would not be that worried here.

Agreed! Tests should be more tolerant of counter imprecision. The
unevictable LRU is an optimization, and the transition to that list is an
internal implementation detail.
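
The kind of check being discussed here (whether a page has actually landed
on the unevictable LRU) is read back from procfs. A rough sketch, not the
selftest's actual code, assuming root, a 4 KiB page size and the documented
pagemap/kpageflags layout:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define KPF_UNEVICTABLE	18	/* bit index, Documentation/admin-guide/mm/pagemap.rst */

int page_is_unevictable(void *addr)
{
	uint64_t pagemap_entry, kpageflags_entry, pfn;
	int fd;

	/* /proc/self/pagemap: one u64 per virtual page */
	fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0)
		return -1;
	if (pread(fd, &pagemap_entry, sizeof(pagemap_entry),
		  ((uintptr_t)addr / 4096) * 8) != sizeof(pagemap_entry)) {
		close(fd);
		return -1;
	}
	close(fd);

	if (!(pagemap_entry & (1ULL << 63)))	/* page not present */
		return 0;
	pfn = pagemap_entry & ((1ULL << 55) - 1);

	/* /proc/kpageflags: one u64 of KPF_* bits per page frame */
	fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0)
		return -1;
	if (pread(fd, &kpageflags_entry, sizeof(kpageflags_entry),
		  pfn * 8) != sizeof(kpageflags_entry)) {
		close(fd);
		return -1;
	}
	close(fd);

	return !!(kpageflags_entry & (1ULL << KPF_UNEVICTABLE));
}

As long as the page still sits in a per-cpu pagevec the Unevictable bit may
not be set yet, which is exactly the window the test keeps tripping over.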
 
> > I suppose that for consistency and correctness we should perform a
> > drain prior to each read from /proc/*/pagemap.  Presumably this would
> > be far too expensive.
> >
> > Is there any other way?  One such might be to make the MLOCK_ONFAULT
> > pages bypass the lru_add_pvecs?
> >
> 
> I would rather have something similar to
> /proc/sys/vm/stat_refresh which drains the pagevecs.

No, please don't. Pagevec draining is far from the only batching scheme we
use, and an interface like this would promise users that it effectively
forces flushing of all of them.

Can we simply update the test to be more tolerant to imprecision instead?
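
One way to do that, just a sketch rather than a proposed patch and reusing
the hypothetical page_is_unevictable() helper from the sketch above, would
be to retry the check a few times instead of failing on the first miss:

#include <sched.h>
#include <time.h>

int page_is_unevictable(void *addr);	/* hypothetical helper from above */

static int check_unevictable_with_retries(void *addr, int retries)
{
	struct timespec delay = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };

	while (retries-- > 0) {
		if (page_is_unevictable(addr) == 1)
			return 1;
		/* pagevecs may get drained by unrelated activity meanwhile */
		sched_yield();
		nanosleep(&delay, NULL);	/* wait 10ms between attempts */
	}
	return 0;	/* never observed on the unevictable LRU */
}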

-- 
Michal Hocko
SUSE Labs
