Message-ID: <561D6BBB.6070706@hpe.com>
Date: Tue, 13 Oct 2015 16:38:19 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>,
Davidlohr Bueso <dave@...olabs.net>,
Will Deacon <will.deacon@....com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
boqun.feng@...il.com
Subject: Re: [PATCH v7 1/5] locking/qspinlock: relaxes cmpxchg & xchg ops
in native code
On 10/13/2015 02:02 PM, Peter Zijlstra wrote:
> On Tue, Sep 22, 2015 at 04:50:40PM -0400, Waiman Long wrote:
>> This patch replaces the cmpxchg() and xchg() calls in the native
>> qspinlock code with more relaxed versions of those calls to enable
>> other architectures to adopt queued spinlocks with less performance
>> overhead.
>> @@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
>> static __always_inline int queued_spin_trylock(struct qspinlock *lock)
>> {
>> if (!atomic_read(&lock->val) &&
>> - (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
>> + (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
>> return 1;
>> return 0;
>> }
>> @@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
>> {
>> u32 val;
>>
>> - val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>> + val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
>> if (likely(val == 0))
>> return;
>> queued_spin_lock_slowpath(lock, val);
>> @@ -319,7 +329,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>> if (val == new)
>> new |= _Q_PENDING_VAL;
>>
>> - old = atomic_cmpxchg(&lock->val, val, new);
>> + old = atomic_cmpxchg_acquire(&lock->val, val, new);
>> if (old == val)
>> break;
>>
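[For readers following along: a minimal user-space sketch, using C11 atomics rather than the kernel's atomic_t API, of the trylock pattern the hunks above change. The point of the patch is that a lock acquisition only needs acquire ordering on the successful compare-and-exchange; the default cmpxchg() implies a full barrier, which is free on x86 but costs extra on weakly ordered architectures. Names here (spin_trylock, LOCKED_VAL) are illustrative, not the kernel's.]

```c
/* Sketch of a test-and-test-and-set trylock in the style of
 * queued_spin_trylock(), with acquire ordering on success only.
 * The plain relaxed read up front avoids a cache-line bounce
 * when the lock is already held. */
#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED_VAL 1u

struct spinlock { atomic_uint val; };

static bool spin_trylock(struct spinlock *lock)
{
	unsigned int expected = 0;

	if (atomic_load_explicit(&lock->val, memory_order_relaxed) != 0)
		return false;
	/* Acquire on success orders all later accesses in the critical
	 * section after the lock acquisition; relaxed on failure, since
	 * we publish nothing when the cmpxchg loses. */
	return atomic_compare_exchange_strong_explicit(&lock->val,
			&expected, LOCKED_VAL,
			memory_order_acquire, memory_order_relaxed);
}

static void spin_unlock(struct spinlock *lock)
{
	/* Release pairs with the acquire in spin_trylock(). */
	atomic_store_explicit(&lock->val, 0, memory_order_release);
}
```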
> So given recent discussion, all this _release/_acquire stuff is starting
> to worry me.
>
> So we've not declared if they should be RCsc or RCpc, and given this
> patch (and the previous ones) these lock primitives turn into RCpc if
> the atomic primitives are RCpc.
>
> So far only the proposed PPC implementation is RCpc -- and their current
> spinlock implementation is also RCpc, but that is a point of discussion.
>
> Just saying..
Davidlohr's patches making similar changes in other locking code will
have this issue as well. In any case, the goal of this patch is to make
the generic qspinlock code less costly when ported to other
architectures. The change has no effect on x86, which is currently the
only architecture using qspinlock.
>
> Also, I think we should annotate the control dependencies in these
> things.
Will do.
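[For context on the annotation request: a control dependency is the ordering a conditional branch provides between a load and a later store. A hedged, user-space C11 sketch of the pattern being annotated; the comment marking the dependency is the annotation Peter is asking for. The names (ctrl_dep flag/data) are illustrative only, and note that C11 itself does not guarantee control-dependency ordering; the kernel relies on compiler and hardware behaviour here.]

```c
/* Sketch: a load followed by a dependent conditional store. On hardware
 * that honours control dependencies, the store cannot be made visible
 * before the load that the branch depends on; the annotation comment
 * keeps a later refactor from optimising the branch away. */
#include <stdatomic.h>

static atomic_int flag;
static atomic_int data;

static int consumer(void)
{
	if (atomic_load_explicit(&flag, memory_order_relaxed)) {
		/* Control dependency: this store is ordered after the
		 * load of flag above by the conditional branch. */
		atomic_store_explicit(&data, 1, memory_order_relaxed);
		return 1;
	}
	return 0;
}
```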
Cheers,
Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/