[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <1363223574.25976.135.camel@thor.lan>
Date: Wed, 13 Mar 2013 21:12:54 -0400
From: Peter Hurley <peter@...leysoftware.com>
To: Michel Lespinasse <walken@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jslaby@...e.cz>,
Sasha Levin <levinsasha928@...il.com>,
Dave Jones <davej@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Shawn Guo <shawn.guo@...aro.org>, linux-kernel@...r.kernel.org,
linux-serial@...r.kernel.org
Subject: Re: [PATCH v5 00/44] ldisc patchset
On Wed, 2013-03-13 at 04:36 -0700, Michel Lespinasse wrote:
> Have you considered building your ldlock based on lib/rwsem-spinlock.c
> instead ? i.e. having an internal spinlock to protect the ldisc
> reference count and the reader and writer queues. This would seem much
> simpler get right. The downside would be that a spinlock would be
> taken for a short time whenever an ldisc reference is taken or
> released. I don't expect that the internal spinlock would get
> significant contention ?
That would have been too easy :)
TBH, I hadn't considered it until I was most the way through a working
atomic version. I had already split the reader/writer wait lists. And
figured out how to always use the wait bias for every waiting reader and
writer -- rather than the rwsem way of testing for an empty list --
which made the timeout handling easier.
At the time, the only thing that I was still struggling with was
recursion, and the spinlock flavor wasn't going to fix that. So I just
kept with the atomic flavor.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists