Message-ID: <20090114170445.GA18964@wotan.suse.de>
Date: Wed, 14 Jan 2009 18:04:45 +0100
From: Nick Piggin <npiggin@...e.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Avi Kivity <avi@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Gregory Haskins <ghaskins@...ell.com>,
Matthew Wilcox <matthew@....cx>,
Andi Kleen <andi@...stfloor.org>,
Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Morreale <pmorreale@...ell.com>,
Sven Dietrich <SDietrich@...ell.com>,
Dmitry Adamushko <dmitry.adamushko@...il.com>
Subject: Re: [PATCH -v8][RFC] mutex: implement adaptive spinning
On Wed, Jan 14, 2009 at 05:46:39PM +0100, Peter Zijlstra wrote:
> On Mon, 2009-01-12 at 19:32 +0200, Avi Kivity wrote:
> > Peter Zijlstra wrote:
> > > Spinlocks can use 'pure' MCS locks.
> > >
> >
> > How about this, then. In mutex_lock(), keep wait_lock locked and only
> > release it when scheduling out. Waiter spinning naturally follows. If
> > spinlocks are cache friendly (are they today?)
>
> (no they're not, Nick's ticket locks still spin on a shared cacheline
> IIRC -- the MCS locks mentioned could fix this)
It reminds me. I wrote a basic variation of MCS spinlocks a while back and
converted the dcache lock to use it, which showed large dbench improvements
on a big machine (of course for different reasons than the dbench
improvements in this thread).
http://lkml.org/lkml/2008/8/28/24
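For anyone who hasn't seen them, the basic MCS scheme looks roughly like
this (a minimal sketch from memory, not the code from the patch above;
memory barriers omitted for brevity):

struct mcs_node {
	struct mcs_node *next;
	int locked;
};

struct mcs_lock {
	struct mcs_node *tail;			/* last waiter, or NULL */
};

static void mcs_spin_lock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	prev = xchg(&lock->tail, node);		/* join the queue */
	if (!prev)
		return;				/* uncontended */
	ACCESS_ONCE(prev->next) = node;		/* link in behind old tail */
	while (!ACCESS_ONCE(node->locked))	/* spin on our own cacheline */
		cpu_relax();
}

static void mcs_spin_unlock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = ACCESS_ONCE(node->next);

	if (!next) {
		if (cmpxchg(&lock->tail, node, NULL) == node)
			return;			/* nobody queued behind us */
		while (!(next = ACCESS_ONCE(node->next)))
			cpu_relax();		/* racing waiter still linking */
	}
	ACCESS_ONCE(next->locked) = 1;		/* purely local handoff */
}

The point is that each waiter spins on its own node, so the only cross-CPU
traffic is the single handoff store, whereas with ticket locks every
waiter polls one shared word.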
Each "lock" object is sane in size because given set of spin-local queues may
only be used once per lock stack. But any spinlocks within a mutex acquisition
will always be at the bottom of such a stack anyway, by definition.
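Something like the following gives the flavour of the space trick, reusing
the mcs_node from the sketch above (hypothetical names, not the actual code
from the patch; relies on preemption being disabled, which spin_lock
already guarantees):

#define MAX_SPIN_NEST	4	/* deepest realistic spinlock stack */

static DEFINE_PER_CPU(struct mcs_node, spin_queue_nodes[MAX_SPIN_NEST]);
static DEFINE_PER_CPU(int, spin_nest_depth);

static struct mcs_node *get_queue_node(void)
{
	int d = __get_cpu_var(spin_nest_depth)++;

	/* depth is stable here because preemption is off */
	return &__get_cpu_var(spin_queue_nodes)[d];
}

static void put_queue_node(void)
{
	__get_cpu_var(spin_nest_depth)--;
}

So the lock itself carries only the tail pointer, and the queue nodes live
per-CPU, recycled per nesting level.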
If you can use any of the code or concepts for your work, that would be great.
> > we inherit that. If
> > there is no contention on the mutex, then we don't need to reacquire the
> > wait_lock on mutex_unlock() (not that the atomic op is that expensive
> > these days).
>
> That might actually work, although we'd have to move the
> __mutex_slowpath_needs_to_unlock() branch outside wait_lock otherwise
> we'll deadlock :-)
>
> It might be worth trying this if we get serious fairness issues with the
> current construct.
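For what it's worth, my reading of that scheme is roughly the following (a
very rough sketch with a hypothetical wake_up_sleeper() helper, and with
the __mutex_slowpath_needs_to_unlock() test pulled outside wait_lock as
you note, to avoid the deadlock):

void mutex_lock_sketch(struct mutex *m)
{
	spin_lock(&m->wait_lock);
	/* wait_lock is deliberately kept held across the critical
	 * section; contenders spin on it, in FIFO order if the
	 * spinlocks are queue-based */
}

void mutex_unlock_sketch(struct mutex *m)
{
	if (__mutex_slowpath_needs_to_unlock())	/* outside wait_lock */
		wake_up_sleeper(m);		/* hypothetical */
	spin_unlock(&m->wait_lock);	/* next spinner becomes owner */
}

The tricky part would be the schedule-out path, where the owner has to
drop wait_lock before blocking so that waiters can stop spinning and go to
sleep too.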