Message-Id: <1213198150.6436.26.camel@lts-notebook>
Date:	Wed, 11 Jun 2008 11:29:10 -0400
From:	Lee Schermerhorn <Lee.Schermerhorn@...com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Nick Piggin <npiggin@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, kosaki.motohiro@...fujitsu.com,
	linux-mm@...ck.org, eric.whitney@...com
Subject: Re: [PATCH -mm 17/25] Mlocked Pages are non-reclaimable

On Tue, 2008-06-10 at 19:48 -0400, Rik van Riel wrote:
> On Tue, 10 Jun 2008 17:43:17 -0400
> Lee Schermerhorn <Lee.Schermerhorn@...com> wrote:
> 
> > On Tue, 2008-06-10 at 17:14 -0400, Rik van Riel wrote:
> > > On Tue, 10 Jun 2008 05:31:30 +0200
> > > Nick Piggin <npiggin@...e.de> wrote:
> > > 
> > > > If we eventually run out of page flags on 32 bit, then sure this might be
> > > > one we could look at getting rid of. Once the code has proven itself.
> > > 
> > > Yes, after the code has proven stable, we can probably get
> > > rid of the PG_mlocked bit and use only PG_unevictable to mark
> > > these pages.
> > > 
> > > Lee, Kosaki-san, do you see any problem with that approach?
> > > Is the PG_mlocked bit really necessary for non-debugging
> > > purposes?
> > 
> > Well, it does speed up the check for mlocked pages in page_reclaimable()
> > [now page_evictable()?] as we don't have to walk the reverse map to
> > determine that a page is mlocked.   In many places where we currently
> > test page_reclaimable(), we really don't want to and maybe can't walk
> > the reverse map.
> 
> There are a few places:
> 1) the pageout code, which calls page_referenced() anyway; we can
>    change page_referenced() to return PAGE_MLOCKED and do the right
>    thing from there

In vmscan, true.  try_to_unmap() will catch it too, but by then we'll
have let the page ride through the active list to the inactive list,
and we won't catch it until shrink_page_list().  Still, that only
happens once per page, and after that it's hidden on the unevictable
(nee noreclaim) list.
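
For reference, the fast path that PG_mlocked buys us is just a flag
test, no rmap walk.  Schematically (this is a paraphrase of the
current patches rather than the exact code, and it ignores the
SHM_LOCKed-mapping case):

	/*
	 * Cheap evictability test:  with PG_mlocked set we know the
	 * page belongs on the unevictable list without walking the
	 * reverse map to look for a VM_LOCKED vma.
	 */
	int page_evictable(struct page *page, struct vm_area_struct *vma)
	{
		if (PageMlocked(page))
			return 0;	/* already culled, or about to be */
		if (vma && (vma->vm_flags & VM_LOCKED))
			return 0;	/* fault path:  vma is mlocked */
		return 1;
	}

The late cull mentioned above then falls out of shrink_page_list()
seeing the mlock result come back from try_to_unmap() and diverting
the page to the unevictable list instead of reclaiming it.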

We might want to kill the "cull in fault path" patch, tho'.

> 2) when the page is moved from a per-cpu pagevec onto an LRU list,
>    we may be able to simply skip the check there on the theory that
>    the pagevecs are small and the pageout code will eventually catch
>    these (few?) pages - actually, setting PG_noreclaim on a page
>    that is in a pagevec but not on an LRU list might catch that
> 
> Does that seem reasonable/possible?

Not sure.  The most recent patches that I posted do not use a pagevec
for the noreclaim/unevictable list; they put unevictable pages directly
onto that list to avoid race conditions that could strand a page.
Kosaki-san and I spent a lot of time analyzing and testing the current
code for potential page leaks onto the unevictable list.  It currently
depends on the atomic TestSet/TestClear of the PG_mlocked bit, along
with the page lock and LRU isolation/putback, to resolve all of the
potential races.  I attempted to describe this aspect in the doc.
We'd have to rethink all of that.
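
Schematically, the discipline that closes those races looks like this
(a from-memory paraphrase of the mlock side of the current patches;
stats, error handling and the munlock side are omitted):

	/*
	 * Only the task that wins the atomic PG_mlocked transition moves
	 * the page, so concurrent mlock()/munlock() callers cannot strand
	 * it on the wrong list.  The caller holds the page lock.
	 */
	if (!TestSetPageMlocked(page)) {
		if (!isolate_lru_page(page))	/* pull it off the normal LRU */
			putback_lru_page(page);	/* re-add; lands on unevictable */
	}

The munlock side is the mirror image:  TestClearPageMlocked() plus a
reverse map walk to make sure no other VM_LOCKED vma still maps the
page before letting it back onto the normal LRU.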



