Message-ID: <45285E9A.9070009@yahoo.com.au>
Date: Sun, 08 Oct 2006 12:12:42 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Andrew Morton <akpm@...l.org>
CC: Nick Piggin <npiggin@...e.de>,
Linux Memory Management <linux-mm@...ck.org>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch 3/3] mm: fault handler to replace nopage and populate
Andrew Morton wrote:
>- You may find that gcc generates crap code for the initialisation of the
> `struct fault_data'. If so, filling the fields in by hand one-at-a-time
> will improve things.
>
OK.
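
(For illustration only -- a minimal sketch of the two initialisation
styles being contrasted. The struct layout and field names below are
hypothetical, not the actual fault_data from the patch.)

struct fault_data {
	unsigned long address;
	unsigned long pgoff;
	unsigned int flags;
	int type;
};

/* Aggregate initialiser: gcc may lower this to a zeroing memset of the
 * whole struct followed by the individual stores. */
static inline void fill_fault_data_aggregate(struct fault_data *out,
		unsigned long address, unsigned long pgoff, unsigned int flags)
{
	struct fault_data fdata = {
		.address = address,
		.pgoff   = pgoff,
		.flags   = flags,
	};
	*out = fdata;
}

/* Filling the fields in by hand, as suggested above: typically just four
 * plain stores, with no redundant zeroing of the whole struct. */
static inline void fill_fault_data_by_hand(struct fault_data *out,
		unsigned long address, unsigned long pgoff, unsigned int flags)
{
	out->address = address;
	out->pgoff   = pgoff;
	out->flags   = flags;
	out->type    = 0;
}
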
>- So is the plan here to migrate all code over to using
> vm_operations.fault() and to finally remove vm_operations.nopage and
> .nopfn? If so, that'd be nice.
>
Definitely remove .nopage, .populate, and hopefully .page_mkwrite.
.nopfn is a little harder because it doesn't quite follow the same pattern
as the others (e.g. it has no struct page).
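
(Roughly, the shape of the conversion -- the prototypes below are
paraphrased rather than copied from the patch, and my_lookup_page() is a
hypothetical helper:)

/* Old style: ->nopage() takes a bare address plus an out-parameter for
 * the fault type, and ->populate() exists separately for non-linear
 * mappings. */
static struct page *my_nopage(struct vm_area_struct *vma,
			      unsigned long address, int *type)
{
	struct page *page = my_lookup_page(vma, address);

	if (type)
		*type = VM_FAULT_MINOR;
	return page;		/* NULL is treated as NOPAGE_SIGBUS */
}

/* New style: ->fault() gets one descriptor carrying the faulting
 * address/offset and reports status back through it, which is what lets
 * a single hook subsume .nopage and .populate. */
static struct page *my_fault(struct vm_area_struct *vma,
			     struct fault_data *fdata)
{
	struct page *page = my_lookup_page(vma, fdata->address);

	fdata->type = page ? VM_FAULT_MINOR : VM_FAULT_SIGBUS;
	return page;
}

static struct vm_operations_struct my_vm_ops = {
	.fault = my_fault,	/* instead of .nopage / .populate */
};
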
>- As you know, there is a case for constructing that `struct fault_data'
> all the way up in do_no_page(): so we can pass data back, asking
> do_no_page() to rerun the fault if we dropped mmap_sem.
>
That is what it is doing: do_no_page should go away (it is basically
duplicated in __do_fault -- I left it there because I don't know whether
people are happy with a flag day or would rather migrate slowly).
But I have converted regular pagecache (mm/filemap.c) to use .fault rather
than .nopage and .populate, so you should be able to do the mmap_sem thing
right now. Maybe that's something you could look at if you get time, i.e.
whether this .fault handler will be sufficient for you?
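
(Purely as a sketch of how the pass-back channel might be used for the
mmap_sem case -- the FAULT_FLAG_ALLOW_RETRY / FAULT_RET_RETRY names and
the my_*() helpers below are invented for illustration and do not exist
in the patch as posted:)

static struct page *my_blocking_fault(struct vm_area_struct *vma,
				      struct fault_data *fdata)
{
	struct address_space *mapping = vma->vm_file->f_mapping;
	struct page *page;

	page = my_find_page_nonblock(mapping, fdata->pgoff);
	if (page)
		return page;			/* fast path, nothing blocks */

	if (fdata->flags & FAULT_FLAG_ALLOW_RETRY) {
		/* Drop mmap_sem before waiting on I/O... */
		up_read(&vma->vm_mm->mmap_sem);
		my_start_read(mapping, fdata->pgoff);
		/*
		 * ...and tell __do_fault to rerun the fault from scratch,
		 * since the vma may have changed while mmap_sem was dropped.
		 */
		fdata->type = FAULT_RET_RETRY;
		return NULL;
	}

	/* Caller cannot tolerate a retry: block with mmap_sem held, as today. */
	return my_read_page_blocking(mapping, fdata->pgoff);
}
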
>- No useful opinion on the substance of this patch, sorry. It's Saturday ;)
>
No hurry. Thanks for the quick initial comments.