Message-Id: <1157136106.18728.27.camel@localhost.localdomain>
Date: Fri, 01 Sep 2006 11:41:46 -0700
From: Dave Hansen <haveblue@...ibm.com>
To: schwidefsky@...ibm.com
Cc: Andy Whitcroft <apw@...dowen.org>, linux-kernel@...r.kernel.org,
virtualization@...ts.osdl.org, akpm@...l.org,
nickpiggin@...oo.com.au, frankeh@...son.ibm.com
Subject: Re: [patch 3/9] Guest page hinting: volatile page cache.
On Fri, 2006-09-01 at 20:31 +0200, Martin Schwidefsky wrote:
> On Fri, 2006-09-01 at 11:23 -0700, Dave Hansen wrote:
> > OK, and is there no workable solution other than a bit in page->flags
> > to keep the two paths from running at the same time?
> >
> > It seems like that hashed lock (or the lock in mem_map[]) we were
> > talking about earlier might be applicable here, too.
>
> The indication of which page has already been removed from the page
> cache by a discard fault is by definition per-page.
Right. So a single bit that gets set and cleared wouldn't work, because
it could be interpreted incorrectly when several pages are involved. But
what about a lock?
> The situation is different from the one with PG_state_change, which is
> used to protect critical sections: once the cpu has left the critical
> section, that bit can be cleared again. The discard bit cannot be
> cleared until the page really has been freed.
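Just to be sure I have the lifetime difference right, it's roughly this
(accessor names invented here, in the usual page-flags macro style):

	/* PG_state_change: transient, held only across a critical section */
	SetPageStateChange(page);
	/* ... critical section ... */
	ClearPageStateChange(page);

	/* discard bit: sticky; set at discard-fault time, and only
	 * cleared once the page has actually been freed */
	SetPageDiscarded(page);
	...
	ClearPageDiscarded(page);	/* in the final free path only */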
While something like the following wouldn't be scalable, it would
functionally work, right?
+static DEFINE_SPINLOCK(discard_lock);
+
+static void __page_discard(struct page *page)
+{
+	spin_lock(&discard_lock);
...
+	spin_unlock(&discard_lock);
+}
+
+void __delete_from_swap_cache(struct page *page)
+{
+	spin_lock(&discard_lock);
...
+	spin_unlock(&discard_lock);
+}
+
+void __remove_from_page_cache(struct page *page)
+{
+	spin_lock(&discard_lock);
...
+	spin_unlock(&discard_lock);
+}
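If that works, the scalability problem could presumably be attacked with
that hashed lock: hash the struct page pointer into a small array of
spinlocks, the same trick page_waitqueue() plays. A rough sketch (names
and sizes invented, and each lock would still need a spin_lock_init() at
boot):

+#include <linux/hash.h>	/* for hash_ptr() */
+
+#define DISCARD_HASH_BITS	8
+
+static spinlock_t discard_locks[1 << DISCARD_HASH_BITS];
+
+static spinlock_t *page_discard_lock(struct page *page)
+{
+	return &discard_locks[hash_ptr(page, DISCARD_HASH_BITS)];
+}

Each of the paths above would then take page_discard_lock(page) instead
of the single global lock, so unrelated pages wouldn't contend.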
-- Dave