Message-ID: <alpine.LFD.1.10.0808211356360.3487@nehalem.linux-foundation.org>
Date: Thu, 21 Aug 2008 14:00:22 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
"H. Peter Anvin" <hpa@...or.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, Joe Perches <joe@...ches.com>,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC PATCH] Writer-biased low-latency rwlock v8
On Thu, 21 Aug 2008, Mathieu Desnoyers wrote:
>
> Given that I need to limit the number of readers to a smaller number of
> bits
I stopped reading right there.
Why?
Because that is already crap.
Go look at my code once more. Go look at how it has 128 bits of data,
exactly so that it DOES NOT HAVE TO LIMIT THE NUMBER OF READERS.
And then go look at it again.
Look at it five times, and until you can understand that it still uses
just a 32-bit word for the fast path and no unnecessary crap in it, but
it actually has 128 bits of data for all the slow paths, don't bother
emailing me any new versions.
Please. You -still- apparently haven't looked at it, at least not enough
to understand the _point_ of it. You still go on about trying to fit in
three or four different numbers in that one word. Even though the whole
point of my rwlock is that you need exactly _one_ count (active readers),
and _one_ bit (active writer), and _one_ extra bit ("contention, go to slow
path, look at the other bits ONLY IN THE SLOW PATH!")
That leaves 30 bits for readers. If you still think you need to "limit the
number of readers", then you aren't getting it.
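The word layout being described can be sketched with C11 atomics (the kernel itself would use its own cmpxchg primitives, and the actual thread's code differs). Everything here, the names `struct fastrw`, `RW_WRITER`, `RW_CONTENDED`, `read_trylock_fast`, and the choice of what lives in the slow-path fields, is invented for illustration; it is not the code from the patch or from Linus's earlier mail, only a minimal model of "one 30-bit reader count, one writer bit, one contention bit in a single 32-bit fast-path word, with 96 more bits that only the slow paths ever look at":

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical layout of the 32-bit fast-path word (names invented):
 *   bits 0..29 : active reader count (30 bits, so no practical limit)
 *   bit  30    : active writer
 *   bit  31    : contention; "go to slow path, look at the other bits
 *                ONLY IN THE SLOW PATH"
 */
#define RW_WRITER      (1u << 30)
#define RW_CONTENDED   (1u << 31)
#define RW_READER_MASK (RW_WRITER - 1)

/* 128 bits of data in total: the one word the fast path touches,
 * plus 96 bits of state that only the slow paths examine
 * (waiter counts etc.; what exactly goes here is not modeled). */
struct fastrw {
    _Atomic uint32_t fast;  /* the only word the fast path touches */
    uint32_t slow[3];       /* slow-path-only state, 96 more bits  */
};

/* Fast-path read lock: bump the reader count, but only while neither
 * the writer bit nor the contention bit is set.  Returns 1 on success;
 * 0 means "take the slow path" (not modeled here). */
static int read_trylock_fast(struct fastrw *l)
{
    uint32_t old = atomic_load_explicit(&l->fast, memory_order_relaxed);
    do {
        if (old & (RW_WRITER | RW_CONTENDED))
            return 0;  /* fairness and sleeping live in the slow path */
    } while (!atomic_compare_exchange_weak_explicit(
                 &l->fast, &old, old + 1,
                 memory_order_acquire, memory_order_relaxed));
    return 1;
}

static void read_unlock_fast(struct fastrw *l)
{
    atomic_fetch_sub_explicit(&l->fast, 1, memory_order_release);
}
```

The point of the shape is visible in `read_trylock_fast`: the uncontended reader does a single compare-exchange on one word and never decodes anything but two bits, while every hard case is shunted behind `RW_CONTENDED` to code that can afford to read the other 96 bits.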
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/