Message-Id: <20070127143013.e2c839c0.akpm@osdl.org>
Date: Sat, 27 Jan 2007 14:30:13 -0800
From: Andrew Morton <akpm@...l.org>
To: Rik van Riel <riel@...hat.com>
Cc: Christoph Lameter <clameter@....com>,
Nick Piggin <nickpiggin@...oo.com.au>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC] Track mlock()ed pages
On Sat, 27 Jan 2007 17:19:21 -0500
Rik van Riel <riel@...hat.com> wrote:
> Andrew Morton wrote:
>
> > Of course it would. But how do you know it is "too expensive"? We "scan
> > all the vmas mapping a page" as a matter of course in the page scanner -
> > millions of times a minute. If that's "too expensive" then ouch.
>
> We can do it lazily.
>
> At mlock time, move pages onto the mlocked list, unless they
> are there already.
Needs another page flag to determine what list the page is on (eek).
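
Something like this, presumably - a sketch only; PG_mlocked is an
invented flag, the bit number is made up, and free page->flags bits
are scarce on 32-bit:

/*
 * Hypothetical PG_mlocked bit recording that the page sits on the
 * mlocked list rather than active/inactive.  A real patch would
 * need to find a free slot in page->flags.
 */
#include <linux/page-flags.h>

#define PG_mlocked		20	/* invented bit number */

#define PageMlocked(page)	test_bit(PG_mlocked, &(page)->flags)
#define SetPageMlocked(page)	set_bit(PG_mlocked, &(page)->flags)
#define ClearPageMlocked(page)	clear_bit(PG_mlocked, &(page)->flags)
#define TestSetPageMlocked(page) \
		test_and_set_bit(PG_mlocked, &(page)->flags)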
> On munlock, move pages to the active list.
We'd need to determine whether some other vma has mlocked the page too.
That means either a refcount in struct page or a walk of the vmas
mapping the page, and the latter is equivalent to what I'm suggesting.
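
For the vma walk I'd expect something modelled on
page_referenced_anon() - an untested sketch, anon pages only, which
would have to live in mm/rmap.c since page_lock_anon_vma() is static
there; the file-backed i_mmap side and the locking subtleties are
omitted:

/*
 * Sketch: does any vma other than @vma still have this anon page
 * mlocked?  Walks the anon_vma list the same way
 * page_referenced_anon() does.
 */
static int page_mlocked_by_others(struct page *page,
				  struct vm_area_struct *vma)
{
	struct anon_vma *anon_vma;
	struct vm_area_struct *avma;
	int ret = 0;

	anon_vma = page_lock_anon_vma(page);
	if (!anon_vma)
		return 0;

	list_for_each_entry(avma, &anon_vma->head, anon_vma_node) {
		if (avma != vma && (avma->vm_flags & VM_LOCKED)) {
			ret = 1;
			break;
		}
	}
	spin_unlock(&anon_vma->lock);
	return ret;
}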
> For mlock-only
> memory (shared memory segments?) we could add a simple check
> to see if the next process on the list has the page mlocked,
> checking only that one.
>
> While scanning the active list, move mlocked pages that are
> found back onto the mlocked list.
>
> This lazy movement of pages will impact shared libraries,
> but probably not shared memory segments.
>
> Does this sound workable?
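
The scanner half of that might look like this - untested sketch,
assuming the PG_mlocked bit from above plus an invented per-zone
zone->mlocked_list; caller holds zone->lru_lock and has already taken
the page off the active list:

/*
 * Sketch: while refilling the inactive list, divert pages which
 * turn out to be mlocked onto the (hypothetical) per-zone mlocked
 * list instead of the inactive list.
 */
static void lazy_cull_mlocked(struct zone *zone, struct page *page)
{
	if (PageMlocked(page))
		list_add(&page->lru, &zone->mlocked_list);
	else
		list_add(&page->lru, &zone->inactive_list);
}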
That said, I'm still not sure what problem we're trying to solve here.
Knowing how many mlocked pages there are in a zone doesn't sound terribly
interesting and I don't recall ever wanting to know that.
Being able to keep mlocked pages off the LRU altogether sounds more useful.
It's all rather a tight corner case - people don't use mlock much.