Message-ID: <CA+55aFxHco9QKkzJshbae4athxGdCoZY0vBB3Y=mJ6deOLnJuQ@mail.gmail.com>
Date: Wed, 27 Feb 2013 19:19:28 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rik van Riel <riel@...hat.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>,
linux-tip-commits@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, aquini@...hat.com,
Michel Lespinasse <walken@...gle.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket lock
out of line

On Wed, Feb 27, 2013 at 6:58 PM, Rik van Riel <riel@...hat.com> wrote:
>
> On the other hand, both MCS and the fast queue locks
> implemented by Michel showed low variability and high
> performance.

On microbenchmarks, and when implemented for only a single subsystem, yes.

> The numbers for Michel's MCS and fast queue lock
> implementations appear to be both fast and stable.

I do think that doing specialized spinlocks for special areas may be a
reasonable approach, and it's quite possible that the SysV IPC thing is
one such area.

But no, I don't think the numbers I've seen for Michel's MCS are AT
ALL comparable to the generic spinlocks, and the interface makes them
incompatible as a drop-in replacement, even just to test in the
general case.
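
For context, here is a minimal sketch of an MCS-style lock (illustrative
code only, not Michel's actual patches; all names here are made up). The
point is the interface: unlike spin_lock(&lock), every caller has to
supply its own queue node to both the lock and unlock paths, which is
exactly why it cannot be dropped in under the existing spinlock API:

/*
 * Minimal MCS lock sketch in C11 atomics. Each waiter spins on its
 * own node's "locked" flag instead of on the shared lock word, so
 * contended waiters don't all bounce the same cache line.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;
	atomic_bool locked;		/* true while this waiter must spin */
};

struct mcs_lock {
	struct mcs_node *_Atomic tail;	/* last waiter in queue, or NULL */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, true);

	/* Join the queue; the old tail (if any) will hand off to us. */
	prev = atomic_exchange(&lock->tail, node);
	if (!prev)
		return;			/* queue was empty: we own the lock */

	atomic_store(&prev->next, node);
	while (atomic_load(&node->locked))
		;			/* spin on our own cache line only */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		struct mcs_node *expected = node;

		/* No visible successor: try to empty the queue. */
		if (atomic_compare_exchange_strong(&lock->tail,
						   &expected, NULL))
			return;
		/* A waiter is mid-enqueue; wait for it to link in. */
		while (!(next = atomic_load(&node->next)))
			;
	}
	atomic_store(&next->locked, false);	/* hand the lock off */
}

Note the extra "node" argument everywhere: that is the incompatibility.
A generic spin_lock(&lock) caller has nowhere to keep that per-waiter
state, so you cannot just swap the implementations and benchmark them
against each other.
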
Don't get me wrong: I think the targeted approach is *better*. I just
also happen to think that you spout big words about things that aren't
all that big, and try to make this a bigger deal than it is. The
benchmark numbers you point to are micro-benchmarks, and not all
comparable.

Linus