Message-Id: <200609152335.39960.arnd@arndb.de>
Date: Fri, 15 Sep 2006 23:35:39 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: linux-mm@...ck.org,
Linux Kernel list <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...l.org>
Subject: Re: [RFC] page fault retry with NOPAGE_RETRY
On Friday 15 September 2006 00:55, Benjamin Herrenschmidt wrote:
> Somebody pointed out to me that this might also be used to kill another
> bird, though I have not really thought about it and whether it's good or
> bad: the old problem of needing a struct page for things that can
> be mmap'ed. Using that trick, a driver could do the set_pte() itself in
> the nopage handler and return NOPAGE_RETRY. I'm not sure about
> advertising that feature, though, as I like all callers of things like
> set_pte() to be in well-known locations; there are various issues
> related to manipulating the page tables that driver writers might not
> get right. Though I suppose that if we consider the approach good, we
> can provide a helper that "does the right thing" as well (calling
> update_mmu_cache(), flush_tlb_whatever(), etc.).
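A rough sketch of what that could look like in a driver, assuming the
NOPAGE_RETRY return code from this RFC and a hypothetical
vm_insert_pfn_prot() helper that wraps set_pte_at(), update_mmu_cache()
and any needed TLB flushing (no such helper exists in mainline, and the
driver structure is made up for illustration):

static struct page *mydrv_nopage(struct vm_area_struct *vma,
				 unsigned long address, int *type)
{
	/* mydrv_data and base_pfn are hypothetical */
	struct mydrv_data *data = vma->vm_file->private_data;
	unsigned long pfn;

	/* translate the faulting address to a device pfn */
	pfn = data->base_pfn + ((address - vma->vm_start) >> PAGE_SHIFT);

	/*
	 * Hypothetical helper: installs the pte with the given
	 * protection and does the update_mmu_cache()/TLB flush dance,
	 * so driver writers cannot get it wrong.
	 */
	vm_insert_pfn_prot(vma, address, pfn,
			   pgprot_noncached(vma->vm_page_prot));

	/* the pte is in place; tell the fault path to retry */
	return NOPAGE_RETRY;
}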
One more point where it can help: when the backing store for the spufs
mem file switches between vmalloc memory and a physical SPU, we need to
change vm_page_prot between (_PAGE_NO_CACHE | _PAGE_GUARDED) and the
opposite. While all my investigations (with help from Hugh Dickins
and Christoph Hellwig) show that this should be safe in the current
code, the idea is still scary. If the nopage function for that file
can simply return NOPAGE_RETRY after setting up the page tables itself,
we don't need to worry about vm_page_prot any more.
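For spufs specifically, the nopage function could then choose the
protection per fault instead of rewriting vm_page_prot on the shared
vma. Roughly, reusing the same hypothetical helper as above and with
the spufs field names simplified:

static struct page *spufs_mem_mmap_nopage(struct vm_area_struct *vma,
					  unsigned long address, int *type)
{
	struct spu_context *ctx = vma->vm_file->private_data;
	unsigned long offset = address - vma->vm_start
			       + (vma->vm_pgoff << PAGE_SHIFT);
	unsigned long pfn;
	pgprot_t prot;

	spu_acquire(ctx);
	if (ctx->state == SPU_STATE_SAVED) {
		/* context saved: backed by the vmalloc'ed local store copy */
		pfn = vmalloc_to_pfn(ctx->csa.lscsa->ls + offset);
		prot = vma->vm_page_prot;
	} else {
		/* context running: real SPU local store, uncached and guarded */
		pfn = (ctx->spu->local_store_phys + offset) >> PAGE_SHIFT;
		prot = __pgprot(pgprot_val(vma->vm_page_prot)
				| _PAGE_NO_CACHE | _PAGE_GUARDED);
	}
	vm_insert_pfn_prot(vma, address, pfn, prot);	/* hypothetical helper */
	spu_release(ctx);

	return NOPAGE_RETRY;
}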
Arnd <><