Message-ID: <AANLkTiktmhP6-JGTKHf+wZhdJRd3Eb-PYzMRR=CyRsUP@mail.gmail.com>
Date: Wed, 23 Mar 2011 22:46:56 +0300
From: Andrey Kuzmin <andrey.v.kuzmin@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Chris Mason <chris.mason@...cle.com>,
linux-kernel@...r.kernel.org, linux-btrfs@...r.kernel.org
Subject: Re: [RFC PATCH] mutex: Apply adaptive spinning on mutex_trylock()
On Wed, Mar 23, 2011 at 6:48 PM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Wed, Mar 23, 2011 at 8:37 AM, Tejun Heo <tj@...nel.org> wrote:
>>
>> Currently, mutex_trylock() doesn't use adaptive spinning. It tries
>> just once. I got curious whether using adaptive spinning on
>> mutex_trylock() would be beneficial and it seems so, at least for
>> btrfs anyway.
>
> Hmm. Seems reasonable to me.
TAS/spin with exponential back-off has been the preferred locking approach
in Postgres (and, I believe, other DBMSes) for years, at least since '04
when I last touched the Postgres code. Even though the cost of a 'false
negative' in user space is much higher than in the kernel, it's still just
a question of scale (no wonder a measurable improvement is reported here
from dbench on an SSD capable of a few dozen thousand IOPS).
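A minimal user-space sketch of what I mean by TAS/spin with exponential
back-off (illustrative only; the constants and names are mine, not the
actual Postgres s_lock code):

	/*
	 * Test-and-set spinlock with bounded busy-spinning and
	 * exponential back-off once contention persists.
	 */
	#include <stdatomic.h>
	#include <unistd.h>

	#define MIN_DELAY_US	10	/* first sleep after spinning fails */
	#define MAX_DELAY_US	10000	/* cap the back-off at 10 ms */
	#define SPINS_PER_TRY	100	/* busy-spin budget before backing off */

	static void tas_lock(atomic_flag *lock)
	{
		unsigned int delay_us = MIN_DELAY_US;

		for (;;) {
			int i;

			/* Cheap busy-wait first: locks are usually held briefly. */
			for (i = 0; i < SPINS_PER_TRY; i++) {
				if (!atomic_flag_test_and_set_explicit(lock,
						memory_order_acquire))
					return;	/* got it */
				/* a cpu_relax()/PAUSE hint would go here */
			}

			/* Still contended: sleep, doubling the delay each round. */
			usleep(delay_us);
			if (delay_us < MAX_DELAY_US)
				delay_us *= 2;
		}
	}

	static void tas_unlock(atomic_flag *lock)
	{
		atomic_flag_clear_explicit(lock, memory_order_release);
	}

The back-off bounds are the tunables that matter; the kernel can afford a
much smaller "false negative" penalty, which is why spinning a bit longer
in mutex_trylock() pays off there.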
Regards,
Andrey
> The patch looks clean, although part of that is just the mutex_spin()
> cleanup that is independent of actually using it in trylock.
>
> So no objections from me.
>
> Linus