Message-ID: <CA+55aFwHxv9RH75BoXYcwdKeHx+otOOpg9494TTdFwQe3biOhw@mail.gmail.com>
Date: Wed, 27 Feb 2013 12:18:13 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rik van Riel <riel@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>, aquini@...hat.com,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Michel Lespinasse <walken@...gle.com>,
linux-tip-commits@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket lock
out of line
On Wed, Feb 27, 2013 at 11:53 AM, Rik van Riel <riel@...hat.com> wrote:
>
> If we have two classes of spinlocks, I suspect we would be better
> off making those high-demand spinlocks MCS or CLH locks, which have
> the property that having N+1 CPUs contend on the lock will never
> result in slower aggregate throughput than having N CPUs contend.
I doubt that.
The fancy "no slowdown" locks almost never work in practice. They scale
well only by performing really badly in the normal, uncontended case:
they either need separate allocations or have memory-ordering problems
that require multiple locked cycles.
A spinlock basically needs to have a fast-case that is a single locked
instruction, and all the clever ones tend to fail that simple test.
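For reference, the ticket lock fast path really is a single locked
instruction. A simplified C11 sketch (illustrative only - the real thing
is asm in arch/x86, and this skips cpu_relax() and the paravirt bits)
looks roughly like this:

#include <stdatomic.h>

/* Simplified ticket-lock sketch, not the actual kernel implementation. */
struct ticket_lock {
        _Atomic unsigned int next;      /* next ticket to hand out */
        _Atomic unsigned int owner;     /* ticket currently being served */
};

static inline void ticket_lock(struct ticket_lock *lock)
{
        /* fast path: one locked instruction (xadd on x86) takes a ticket */
        unsigned int me = atomic_fetch_add_explicit(&lock->next, 1,
                                                    memory_order_acquire);

        /* slow path: spin until our ticket comes up */
        while (atomic_load_explicit(&lock->owner, memory_order_acquire) != me)
                ;       /* cpu_relax() in real code */
}

static inline void ticket_unlock(struct ticket_lock *lock)
{
        atomic_fetch_add_explicit(&lock->owner, 1, memory_order_release);
}

If nobody holds the lock, the xadd is the whole story - that is the bar
the clever queued locks have to clear.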
> I can certainly take profiles of various workloads, but there is
> absolutely no guarantee that I will see the same bottlenecks that
> eg. the people at HP have seen. The largest test system I currently
> have access to has 40 cores, vs. the 80 cores in the (much more
> interesting) HP results I pasted.
>
> Would you also be interested in performance numbers (and profiles)
> of a kernel that has bottleneck spinlocks replaced with MCS locks?
MCS locks don't even work as a drop-in replacement, last time I looked.
They need that extra lock-holder allocation, which forces a different
calling convention on every user, and that is just a pain. Or am I
confusing them with something else?
They might work for the special cases like the sleeping locks, which
have one or two places that take and release the lock, but not for the
generic spinlock.
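To show what I mean about the calling convention: a bare-bones MCS-style
sketch (just the textbook algorithm, nothing from the tree) has to thread
a per-acquisition queue node through every lock/unlock call:

#include <stdatomic.h>
#include <stdbool.h>

struct mcs_node {
        struct mcs_node *_Atomic next;
        _Atomic bool locked;
};

struct mcs_lock {
        struct mcs_node *_Atomic tail;
};

/* Every caller has to supply its own node - that's the changed convention. */
static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *me)
{
        struct mcs_node *prev;

        atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&me->locked, true, memory_order_relaxed);

        /* join the tail of the queue */
        prev = atomic_exchange_explicit(&lock->tail, me, memory_order_acq_rel);
        if (!prev)
                return;         /* lock was free, we own it */

        /* link behind the previous waiter and spin on our own cacheline */
        atomic_store_explicit(&prev->next, me, memory_order_release);
        while (atomic_load_explicit(&me->locked, memory_order_acquire))
                ;               /* cpu_relax() in real code */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *me)
{
        struct mcs_node *succ = atomic_load_explicit(&me->next,
                                                     memory_order_acquire);

        if (!succ) {
                struct mcs_node *expected = me;

                /* no known successor: try to swing the tail back to empty */
                if (atomic_compare_exchange_strong_explicit(&lock->tail,
                                &expected, NULL,
                                memory_order_acq_rel, memory_order_acquire))
                        return;

                /* a successor is mid-enqueue; wait for it to link in */
                while (!(succ = atomic_load_explicit(&me->next,
                                                     memory_order_acquire)))
                        ;
        }
        atomic_store_explicit(&succ->locked, false, memory_order_release);
}

Every call site now has to find somewhere to keep that node - stack,
per-cpu, whatever - and pass it to both acquire and release. Fine for a
lock with one or two call sites, miserable as a generic spin_lock()
replacement.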
Also, it might be worth trying current git - if it's a rwsem that is
implicated, the new lock stealing might be a win.
So before even trying anything fancy, just basic profiles would be
good to see which lock it is. Many of the really bad slowdowns are
actually about the timing details of the sleeping locks (do *not*
enable lock debugging etc for profiling, you want the mutex spinning
code to be active, for example).
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/