Message-ID: <51E9B13E.5060602@hp.com>
Date: Fri, 19 Jul 2013 17:35:58 -0400
From: Waiman Long <waiman.long@...com>
To: George Spelvin <linux@...izon.com>
CC: JBeulich@...ell.com, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...nel.org, tglx@...utronix.de
Subject: Re: [PATCH RFC 1/2] qrwlock: A queue read/write lock implementation
On 07/19/2013 05:11 PM, George Spelvin wrote:
>> What I have in mind is to have 2 separate rwlock initializers - one for
>> fair and one for reader-bias behavior. So the lock owners can decide
>> what behavior they want with a one-line change.
> That's definitely a nicer patch, if it will work. I was imagining that,
> even for a single (type of) lock, only a few uses require reader bias
> (because they might be recursive, or are in an interrupt), but you'd
> want most read_lock sites to be fair.
Yes, the fair rwlock will be the default.
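
Roughly, the two-initializer idea would look something like the sketch
below. The names and the exact lock word layout here are just
illustrative, not the final API:

typedef struct qrwlock {
	atomic_t	cnts;		/* reader count + writer flag */
	arch_spinlock_t	wait_lock;	/* serializes queued waiters */
	int		rd_bias;	/* nonzero = classic reader-bias behavior */
} qrwlock_t;

#define __QRW_LOCK_UNLOCKED(bias)			\
	{ .cnts      = ATOMIC_INIT(0),			\
	  .wait_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
	  .rd_bias   = (bias) }

/* Fair (queued) behavior - the default */
#define DEFINE_QRWLOCK(name)				\
	qrwlock_t name = __QRW_LOCK_UNLOCKED(0)

/* Reader-biased behavior, for locks that may be taken recursively
 * or from interrupt context */
#define DEFINE_QRWLOCK_RDBIAS(name)			\
	qrwlock_t name = __QRW_LOCK_UNLOCKED(1)

Switching a given lock from one behavior to the other is then just a
matter of changing which DEFINE_* line (or initializer) is used where
the lock is declared.
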
> Deciding on a per-lock basis means that one potentially recursive call
> site rules out fair queueing everywhere that lock is used.
>
> I was hoping that the number of necessary unfair calls would
> be small enough that making the read_lock default fair and
> only marking the unfair call sites would be enough.
>
> But I won't really know until I do a survey of the call sites.
I think so. The queue read/write lock, if merged, will be an optional
feature for people to try out to see if they hit any problems with any of
the existing rwlocks. So far, I haven't encountered any problems in my testing.
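
For comparison, the per-call-site marking you describe would look roughly
like the snippet below at the usage level. read_lock_unfair() is a made-up
name here, purely to illustrate annotating the few unfair sites:

static DEFINE_RWLOCK(foo_lock);

void foo_read(void)
{
	read_lock(&foo_lock);		/* fair/queued - the common case */
	/* ... read-side critical section ... */
	read_unlock(&foo_lock);
}

void foo_read_irq(void)			/* may nest inside another reader */
{
	read_lock_unfair(&foo_lock);	/* hypothetical unfair/reader-biased variant */
	/* ... read-side critical section ... */
	read_unlock(&foo_lock);
}
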
BTW, I also tried my version of the rwlock without the waiting queue. In
the high-contention case, it performs slightly better than the
__read_lock_failed changes suggested by Ingo, at least for the
reader-biased variant. It is still not as good as the full version with the
waiting queue. I should be able to provide more performance data next week.
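
To be clear about what "without the waiting queue" means: in that variant
the reader slowpath simply keeps its reader count and spins until the
current writer goes away, roughly like the simplified sketch below (an
illustration using the qrwlock_t sketched above, not the actual patch;
_QW_LOCKED is a placeholder):

#define _QW_LOCKED	0xff	/* writer-present bits; placeholder value */

/* Queue-less (reader-biased) reader slowpath sketch. Assumes the fast
 * path already added the reader count, and that a writer sets the
 * _QW_LOCKED bits in cnts only when no readers or writers are present.
 */
static void qread_lock_slowpath(qrwlock_t *lock)
{
	/* Reader bias: wait only for the active writer to finish,
	 * without queueing behind any waiting writers. */
	while (atomic_read(&lock->cnts) & _QW_LOCKED)
		cpu_relax();
}

The full version differs in that readers first take their place in the
wait queue behind earlier waiters before spinning, which is what makes
it fair.
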
Regards,
Longman