Message-ID: <alpine.LFD.2.00.1001042052210.3630@localhost.localdomain>
Date: Mon, 4 Jan 2010 21:10:29 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc: Minchan Kim <minchan.kim@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, cl@...ux-foundation.org,
"hugh.dickins" <hugh.dickins@...cali.co.uk>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
On Tue, 5 Jan 2010, KAMEZAWA Hiroyuki wrote:
>
> Then, my patch dropped the speculative trial of the page fault and did the
> synchronous job here. I'm still considering how to insert some barrier to
> delay calling remove_vma() until all page faults are gone. One idea was a
> reference count, but it was said to be not crazy enough.
What lock would you use to protect the vma lookup (in order to then
increase the refcount)? A sequence lock with RCU lookup of the vma?
Sounds doable. But it also sounds way more expensive than the current VM
fault handling, which is pretty close to optimal for single-threaded
cases.. That RCU lookup might be cheap, but just the refcount is generally
going to be as expensive as a lock.
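Just to make the cost concrete, the lookup being discussed would have to
look something like the sketch below. This is only a sketch of the idea,
not anything in the tree: mm->vma_seqcount, vma->vm_refcnt and
find_vma_rcu() are hypothetical names invented purely for illustration.

#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/seqlock.h>
#include <linux/atomic.h>

/*
 * Speculative vma lookup: RCU walk validated by a sequence count, plus a
 * per-vma refcount so the vma cannot be freed under us.  The seqcount,
 * the refcount and find_vma_rcu() are all hypothetical additions.
 */
static struct vm_area_struct *get_vma_speculative(struct mm_struct *mm,
						  unsigned long addr)
{
	struct vm_area_struct *vma;
	unsigned int seq;

	rcu_read_lock();
retry:
	seq = read_seqcount_begin(&mm->vma_seqcount);	/* hypothetical field */
	vma = find_vma_rcu(mm, addr);			/* hypothetical helper */
	if (!vma || addr < vma->vm_start) {
		rcu_read_unlock();
		return NULL;
	}
	/*
	 * This atomic is the part that ends up about as expensive as a
	 * lock: a cacheline bouncing between CPUs on every fault.
	 */
	if (!atomic_inc_not_zero(&vma->vm_refcnt))	/* hypothetical field */
		goto retry;
	if (read_seqcount_retry(&mm->vma_seqcount, seq)) {
		atomic_dec(&vma->vm_refcnt);
		goto retry;
	}
	rcu_read_unlock();
	return vma;
}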
Are there some particular mappings that people care about more than
others? If we limit the speculative lookup purely to anonymous memory,
that might simplify the problem space?
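The restriction itself would be trivial; roughly the shape of the check
below (again purely illustrative, the helper name is made up), punting
anything file-backed to the normal mmap_sem path:

/* Only pure anonymous mappings take the speculative path. */
static bool vma_allows_speculation(struct vm_area_struct *vma)
{
	return !vma->vm_file && !vma->vm_ops;
}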
[ From past experiences, I suspect DB people would be upset and really
want it for the general file mapping case.. But maybe the main usage
scenario is something else this time? ]
Linus