Message-ID: <alpine.LFD.2.00.0912010908590.2872@localhost.localdomain>
Date: Tue, 1 Dec 2009 09:15:05 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Arnd Bergmann <arnd@...db.de>
cc: Nick Piggin <npiggin@...e.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [rfc] "fair" rw spinlocks

On Tue, 1 Dec 2009, Arnd Bergmann wrote:
> On Monday 30 November 2009, Linus Torvalds wrote:
> > The best option really would be to try to make it all use RCU, rather than
> > paper over things. That really should improve performance.
>
> Are there any writers at interrupt time?

No, there can't be. That would already be a deadlock, since we take the
read lock without irq protection (exactly because many of the read-lockers
are pretty performance-sensitive).
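(A minimal sketch of the deadlock being described, not code from any tree;
the lock name is just for illustration. A reader that does not disable
interrupts can be preempted by an irq on its own CPU, and if that irq tried
to write-lock the same lock it would spin forever:)

```c
read_lock(&some_rwlock);        /* reader enters with irqs still enabled */

        /* <-- interrupt fires here, on the same CPU */
        write_lock(&some_rwlock);  /* spins waiting for readers to drain... */
        /* ...but the reader below can never resume to unlock: deadlock.   */

read_unlock(&some_rwlock);
```

This is exactly why any rwlock with irq-time writers forces its readers to
disable interrupts, and why a lock with non-irq-disabling readers cannot
have them.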

> If not, another option might be to first convert all the readers that
> can happen from interrupts to RCU, which lets us get rid of the irq
> disable in the write path.

If you convert the irq readers, you generally really need to convert the
rest too. In particular, you still need to convert the write-side to use
the RCU versions of the insert/remove code, and to free the things from
RCU in order for it all to be safe (think: irq reader on another CPU than
the writer, now without any locking).
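(A hedged kernel-style sketch of the conversion being described; all the
names here — my_node, my_list, my_lock — are hypothetical. The point is that
once even one reader is lockless, the writer must use the _rcu list
primitives and defer the free past a grace period, even if it keeps a lock
for writer-vs-writer exclusion:)

```c
struct my_node {
	struct list_head	list;
	struct rcu_head		rcu;
	int			data;
};

static LIST_HEAD(my_list);
static DEFINE_SPINLOCK(my_lock);	/* writers only; no irq disabling */

/* Reader: may run from irq context, takes no lock at all. */
static int reader(int key)
{
	struct my_node *n;
	int found = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(n, &my_list, list) {
		if (n->data == key) {
			found = 1;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

static void free_node_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct my_node, rcu));
}

/* Writer: must use list_del_rcu() so concurrent readers see a
 * consistent list, and call_rcu() so the node isn't freed while an
 * irq reader on another CPU may still be traversing it. */
static void remove_node(struct my_node *n)
{
	spin_lock(&my_lock);
	list_del_rcu(&n->list);
	spin_unlock(&my_lock);
	call_rcu(&n->rcu, free_node_rcu);
}
```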

So you really don't win all that much. At a minimum, you always have to
convert all the writers to use RCU (even if you then keep the rwlock as
the exclusion model), and since that involves a large portion of the
complexity (including at least the RCU freeing side), what you end up with
is that you can avoid converting _some_ of the readers.

So I do agree that you can do things in two stages, but I suspect the irq
disable on the write path part is the least of our problems.

		Linus