Message-ID: <1262591604.4375.4075.camel@twins>
Date: Mon, 04 Jan 2010 08:53:23 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: paulmck@...ux.vnet.ibm.com
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"minchan.kim@...il.com" <minchan.kim@...il.com>,
cl@...ux-foundation.org,
"hugh.dickins" <hugh.dickins@...cali.co.uk>,
Nick Piggin <nickpiggin@...oo.com.au>,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC PATCH] asynchronous page fault.
On Sun, 2010-01-03 at 19:02 -0800, Paul E. McKenney wrote:
> It would not be all that hard for me to make a call_srcu(), but...
>
> 1.	How are you avoiding OOM with SRCU callbacks? (I am sure you
>	have this worked out, but I do have to ask!)
Well, I was thinking of having SRCU force a quiescent state in
call_srcu(), much like you did for preemptible RCU.

Alternatively, we could actively throttle the call_srcu() caller when
we've got too much pending work.
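
In rough code, something like the below; the cb_* fields,
SRCU_MAX_PENDING, and using synchronize_srcu() as the forcing mechanism
are all assumptions of mine, today's srcu_struct has none of this:

	#include <linux/srcu.h>
	#include <linux/spinlock.h>

	/*
	 * Sketch only: cb_lock/cb_head/cb_tail/cb_count and
	 * SRCU_MAX_PENDING are made up for illustration.
	 */
	#define SRCU_MAX_PENDING	10000

	void call_srcu(struct srcu_struct *sp, struct rcu_head *head,
		       void (*func)(struct rcu_head *head))
	{
		head->func = func;
		head->next = NULL;

		spin_lock(&sp->cb_lock);
		*sp->cb_tail = head;		/* append to the callback list */
		sp->cb_tail = &head->next;
		sp->cb_count++;
		spin_unlock(&sp->cb_lock);

		/*
		 * Force a quiescent state when too much work is pending:
		 * the caller pays for a full grace period instead of the
		 * backlog growing until we OOM.  Assumes process context,
		 * since synchronize_srcu() sleeps.
		 */
		if (sp->cb_count > SRCU_MAX_PENDING)
			synchronize_srcu(sp);
	}
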
> 2. How many srcu_struct data structures are you envisioning?
> One globally? One per process? One per struct vma?
> (Not necessary to know this for call_srcu(), but will be needed
> as I work out how to make SRCU scale with large numbers of CPUs.)
For this patch in particular, one global one, covering all vmas.
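
Sketched with the existing srcu_read_lock()/srcu_read_unlock()
primitives; vma_srcu, find_vma_nolock(), handle_fault(), vma_free_cb()
and the srcu_head field are illustrative names only, not code from the
patch:

	#include <linux/srcu.h>

	static struct srcu_struct vma_srcu;	/* one global domain for all vmas */

	/* Reader side: the speculative fault path, which may sleep. */
	static int speculative_fault(struct mm_struct *mm, unsigned long address)
	{
		struct vm_area_struct *vma;
		int idx, ret;

		idx = srcu_read_lock(&vma_srcu);
		vma = find_vma_nolock(mm, address);	/* hypothetical lockless lookup */
		ret = handle_fault(mm, vma, address);	/* may sleep on disk I/O */
		srcu_read_unlock(&vma_srcu, idx);

		return ret;
	}

	/* Updater side: unmap unlinks the vma, then defers the actual free. */
	static void unmap_vma(struct vm_area_struct *vma)
	{
		/* ... unlink vma from the mm ... */
		call_srcu(&vma_srcu, &vma->srcu_head, vma_free_cb);
	}

The point of using SRCU rather than plain RCU here is exactly that the
reader side may block, which classic rcu_read_lock() forbids.
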
One reason to keep the vma SRCU domain separate from other RCU users is
that these vma readers can take rather a long time to reach a quiescent
state, because they are allowed to sleep. Mixing them in with other RCU
users, whose grace periods are much better bounded, would just degrade
everything and bring that OOM scenario much closer.
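
Concretely: an SRCU grace period cannot complete while any reader in
that domain is still active, so a single reader sleeping on major-fault
I/O pins every callback queued in the domain (vma_srcu as in the sketch
above; illustration only):

	idx = srcu_read_lock(&vma_srcu);
	/*
	 * A major fault can sleep here for milliseconds waiting on the
	 * disk; the vma_srcu grace period cannot end until we drop the
	 * read-side lock, so every callback queued with
	 * call_srcu(&vma_srcu, ...) in that window waits too.
	 */
	wait_on_page_locked(page);
	srcu_read_unlock(&vma_srcu, idx);
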