Message-ID: <5305054B.5020601@hp.com>
Date: Wed, 19 Feb 2014 14:26:03 -0500
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
George Spelvin <linux@...izon.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Daniel J Blueman <daniel@...ascale.com>,
Alexander Fyodorov <halcy@...dex.ru>,
Aswin Chandramouleeswaran <aswin@...com>,
Scott J Norton <scott.norton@...com>,
Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com>
Subject: Re: [PATCH v4 1/3] qspinlock: Introducing a 4-byte queue spinlock
implementation
On 02/19/2014 03:52 AM, Peter Zijlstra wrote:
> On Tue, Feb 18, 2014 at 07:50:13PM -0500, Waiman Long wrote:
>> On 02/18/2014 04:34 PM, Peter Zijlstra wrote:
>>> On Tue, Feb 18, 2014 at 02:39:31PM -0500, Waiman Long wrote:
>>>> The #ifdef is harder to take away here. The point is that doing a 32-bit
>>>> exchange may accidentally steal the lock, requiring additional code to
>>>> handle that case. Doing a 16-bit exchange, on the other hand, will never
>>>> steal the lock and so doesn't need the extra handling code. I could
>>>> construct a function with different return values to handle the different
>>>> cases if you think it would make the code easier to read.
>>> Does it really pay to use xchg() with all those fixup cases? Why not
>>> have a single cmpxchg() loop that does just the exact atomic op you
>>> want?
>> The main reason for using xchg instead of cmpxchg is the performance impact
>> when the lock is heavily contended. Under those circumstances, a task may
>> need several rounds of read+atomic-RMW before it succeeds, which can cause
>> a lot of cacheline contention. With xchg, we need at most 2 atomic ops.
>> Using cmpxchg() does simplify the code a bit, at the expense of performance
>> under heavy contention.
> Have you actually measured this?
I haven't actually measured that myself; it is mostly based on my
experience. I could run some timing experiments with the cmpxchg() change
and report back to you later.
-Longman