Message-ID: <alpine.DEB.2.11.1410241003430.29419@gentwo.org>
Date: Fri, 24 Oct 2014 10:16:24 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
Oleg Nesterov <oleg@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Minchan Kim <minchan@...nel.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Al Viro <viro@...iv.linux.org.uk>,
Lai Jiangshan <laijs@...fujitsu.com>,
Davidlohr Bueso <dave@...olabs.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [RFC][PATCH 4/6] SRCU free VMAs
On Tue, 21 Oct 2014, Peter Zijlstra wrote:
> On Mon, Oct 20, 2014 at 04:41:45PM -0700, Linus Torvalds wrote:
> > On Mon, Oct 20, 2014 at 2:56 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> > > Manage the VMAs with SRCU such that we can do a lockless VMA lookup.
> >
> > Can you explain why srcu, and not plain regular rcu?
> >
> > Especially as you then *note* some of the problems srcu can have.
> > Making it regular rcu would also seem to make it possible to make the
> > seqlock be just a seqcount, no?
>
> Because we need to hold onto the RCU read side lock across the entire
> fault, which can involve IO and all kinds of other blocking ops.
Hmmm... One optimization worth doing before we get into these changes is to
allow dropping mmap_sem before we go to sleep for I/O, and then reevaluate
the situation once the I/O is complete. Sleeping for I/O is probably the
longest hold on mmap_sem that is also frequent. With that out of the way it
may be easier to use standard RCU later.
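
Roughly this pattern (a user-space analogue, not kernel code - the lock,
the generation counter and all the names like fake_mm/fault_on are made up
for illustration): take the lock only for the lookup, drop it across the
blocking I/O, then retake it and revalidate before finishing the fault,
retrying if the mapping changed underneath us:

	/*
	 * Sketch only: pthread rwlock stands in for mmap_sem, a generation
	 * counter stands in for "did the mapping change while we slept".
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	struct fake_mm {
		pthread_rwlock_t mmap_sem;	/* stands in for mm->mmap_sem */
		unsigned long generation;	/* bumped on every mapping change */
		bool resident;			/* is the "page" already in memory? */
	};

	static void do_blocking_io(void)
	{
		usleep(1000);			/* pretend to read from disk */
	}

	static int fault_on(struct fake_mm *mm)
	{
		for (;;) {
			unsigned long gen;

			pthread_rwlock_rdlock(&mm->mmap_sem);
			gen = mm->generation;

			if (mm->resident) {
				/* Fast path: finish the fault under the lock. */
				pthread_rwlock_unlock(&mm->mmap_sem);
				return 0;
			}

			/* Slow path: drop the lock before sleeping on I/O. */
			pthread_rwlock_unlock(&mm->mmap_sem);
			do_blocking_io();

			/* Retake the lock and reevaluate once I/O is done. */
			pthread_rwlock_wrlock(&mm->mmap_sem);
			if (mm->generation == gen) {
				mm->resident = true;	/* mapping unchanged: install */
				pthread_rwlock_unlock(&mm->mmap_sem);
				return 0;
			}
			/* Mapping changed while we slept: retry the lookup. */
			pthread_rwlock_unlock(&mm->mmap_sem);
		}
	}

	int main(void)
	{
		struct fake_mm mm = { .resident = false };

		pthread_rwlock_init(&mm.mmap_sem, NULL);
		printf("fault completed: %d\n", fault_on(&mm));
		pthread_rwlock_destroy(&mm.mmap_sem);
		return 0;
	}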