Message-Id: <20200321213142.597e23af955de653fc4db7a1@linux-foundation.org>
Date: Sat, 21 Mar 2020 21:31:42 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Rafael Aquini <aquini@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
shuah@...nel.org, shakeelb@...gle.com
Subject: Re: [PATCH] tools/testing/selftests/vm/mlock2-tests: fix mlock2
false-negative errors
On Sat, 21 Mar 2020 22:03:26 -0400 Rafael Aquini <aquini@...hat.com> wrote:
> > > + * In order to sort out that race, and get the after-fault checks consistent,
> > > + * the "quick and dirty" trick below is required in order to force a call to
> > > + * lru_add_drain_all() to get the recently MLOCK_ONFAULT pages moved to
> > > + * the unevictable LRU, as expected by the checks in this selftest.
> > > + */
> > > +static void force_lru_add_drain_all(void)
> > > +{
> > > + sched_yield();
> > > + system("echo 1 > /proc/sys/vm/compact_memory");
> > > +}
> >
> > What is the sched_yield() for?
> >
>
> Mostly it's there to provide a sleeping gap after the fault, without
> actually adding an arbitrary value with usleep().
>
> It's not a hard requirement, but, in some of the tests I performed
> (without that sleeping gap) I would still see around 1% chance
> of hitting the false-negative. After adding it I could not hit
> the issue anymore.
It's concerning that such deep machinery as pagevec draining is visible
to userspace.
I suppose that for consistency and correctness we should perform a
drain prior to each read from /proc/*/pagemap. Presumably this would
be far too expensive.
Is there any other way? One such might be to make the MLOCK_ONFAULT
pages bypass the lru_add_pvecs?
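For context, the state the selftest checks comes from the documented
pagemap/kpageflags layout (pagemap: bit 63 = present, bits 0-54 = PFN;
kpageflags: one u64 of flags per PFN). Roughly, the lookup amounts to the
sketch below; the helper name is illustrative, not the selftest's own.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define KPF_UNEVICTABLE	18
#define KPF_MLOCKED	33

static uint64_t kpageflags_of(void *addr)
{
	uint64_t pme = 0, flags = 0;
	long psize = sysconf(_SC_PAGESIZE);
	int pm = open("/proc/self/pagemap", O_RDONLY);
	int kp = open("/proc/kpageflags", O_RDONLY);

	if (pm < 0 || kp < 0)
		goto out;

	/* One 64-bit pagemap entry per virtual page. */
	pread(pm, &pme, sizeof(pme), ((uintptr_t)addr / psize) * sizeof(pme));
	if (pme & (1ULL << 63))		/* page present */
		/* One 64-bit flags word per physical page frame. */
		pread(kp, &flags, sizeof(flags),
		      (pme & ((1ULL << 55) - 1)) * sizeof(flags));
out:
	if (pm >= 0)
		close(pm);
	if (kp >= 0)
		close(kp);
	return flags;
}

For a freshly faulted MLOCK_ONFAULT page that is still sitting in a per-CPU
pagevec, the KPF_UNEVICTABLE/KPF_MLOCKED bits are not yet set, which is the
false negative the patch works around.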