Message-Id: <1231517031.442.15.camel@twins>
Date: Fri, 09 Jan 2009 17:03:51 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Chris Mason <chris.mason@...cle.com>,
Ingo Molnar <mingo@...e.hu>, paulmck@...ux.vnet.ibm.com,
Gregory Haskins <ghaskins@...ell.com>,
Matthew Wilcox <matthew@....cx>,
Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Nick Piggin <npiggin@...e.de>,
Peter Morreale <pmorreale@...ell.com>,
Sven Dietrich <SDietrich@...ell.com>
Subject: Re: [PATCH -v7][RFC]: mutex: implement adaptive spinning
On Fri, 2009-01-09 at 10:59 -0500, Steven Rostedt wrote:
> >
> > Adding that blocking on !owner utterly destroys everything.
>
> I was going to warn you about that ;-)
>
> If you stop spinning and block as soon as you see a NULL owner, you are
> almost guaranteed to go to sleep every time. Here's why:
>
> You are spinning and thus have a hot cache on that CPU.
>
> The owner goes to unlock, but it will be running cold-cache. It sets
> lock->owner to NULL, and being cold-cache it is a bit slower to actually
> release the lock.
>
> Once the spinner sees the NULL, it shoots out of the spin, but it sees
> the lock is still not available and goes to sleep, all before the owner
> could release it. This will likely happen on every contention, so you
> lose the benefit of spinning. You probably even make things worse,
> because you add a spin before every sleep.
Which is why I changed the inner loop to:
	l_owner = ACCESS_ONCE(lock->owner);
	if (l_owner && l_owner != owner)
		break;
So that it would continue spinning through the owner == NULL window, and
only stop once some other task has taken the lock.
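
To make both the race and the fix concrete, here is a minimal userspace
sketch, with C11 atomics standing in for ACCESS_ONCE. All the names here
(sketch_mutex, sketch_spin, struct task, etc.) are made up for
illustration; this is not the actual patch:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct task;				/* opaque stand-in for a task struct */

struct sketch_mutex {
	atomic_flag		locked;
	struct task *_Atomic	owner;
};

/*
 * Unlock clears ->owner first and only then releases the lock word,
 * so there is a window where owner == NULL but the lock is still held.
 * That is the window the spinner must not mistake for "lock is gone,
 * time to sleep".
 */
static void sketch_unlock(struct sketch_mutex *lock)
{
	atomic_store_explicit(&lock->owner, NULL, memory_order_relaxed);
	atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}

static bool sketch_trylock(struct sketch_mutex *lock, struct task *me)
{
	if (atomic_flag_test_and_set_explicit(&lock->locked,
					      memory_order_acquire))
		return false;
	atomic_store_explicit(&lock->owner, me, memory_order_relaxed);
	return true;
}

/*
 * 'owner' is the task that held the lock when we started spinning.
 * Returns true once we hold the lock, false when we should block.
 */
static bool sketch_spin(struct sketch_mutex *lock, struct task *owner,
			struct task *me)
{
	for (;;) {
		struct task *l_owner =
			atomic_load_explicit(&lock->owner,
					     memory_order_relaxed);

		/*
		 * Bail out only when a *different* task owns the lock:
		 * we lost the race outright, so sleeping is right.
		 * A NULL owner just means an unlock is in flight, so
		 * keep spinning and try to take the lock ourselves.
		 */
		if (l_owner && l_owner != owner)
			return false;

		if (sketch_trylock(lock, me))
			return true;

		/* the real thing also checks the owner is still
		 * on-CPU and does cpu_relax() here */
	}
}

With the naive policy (break as soon as owner is NULL) the spinner gives
up exactly inside the sketch_unlock() window above; with the owner-change
check it rides that window out and usually grabs the lock without ever
sleeping.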