Message-ID: <20180406105417.GA27619@arm.com>
Date: Fri, 6 Apr 2018 11:54:17 +0100
From: Will Deacon <will.deacon@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
mingo@...nel.org, boqun.feng@...il.com, paulmck@...ux.vnet.ibm.com,
catalin.marinas@....com, Waiman Long <longman@...hat.com>
Subject: Re: [PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
On Thu, Apr 05, 2018 at 07:19:12PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 05, 2018 at 05:59:00PM +0100, Will Deacon wrote:
> > +
> > + /* In the PV case we might already have _Q_LOCKED_VAL set */
> > + if ((val & _Q_TAIL_MASK) == tail) {
> > /*
> > * The smp_cond_load_acquire() call above has provided the
> > + * necessary acquire semantics required for locking.
> > */
> > old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
> > if (old == val)
> > + goto release; /* No contention */
> > }
>
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -464,8 +464,7 @@ void queued_spin_lock_slowpath(struct qs
> * The smp_cond_load_acquire() call above has provided the
> * necessary acquire semantics required for locking.
> */
> - old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
> - if (old == val)
> + if (atomic_try_cmpxchg_release(&lock->val, &val, _Q_LOCKED_VAL))
> goto release; /* No contention */
> }
>
> Does that also work for you? It would generate slightly better code for
> x86 (not that it would matter much on this path).
Assuming you meant to use atomic_try_cmpxchg_relaxed, then that works for
me too.
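
For anyone following along, here's a rough userspace sketch of why the
boolean-returning form can generate slightly better code, using the C11
atomics as stand-ins for the kernel's atomic_t API (the names and the
LOCKED_VAL constant below are illustrative only, not the kernel
definitions):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define LOCKED_VAL	1	/* stand-in for _Q_LOCKED_VAL */

/*
 * cmpxchg style: returns the old value, so the caller needs a separate
 * comparison (an extra CMP instruction on x86) to tell whether the
 * exchange actually happened.
 */
static int lock_cmpxchg(atomic_int *lock, int expected)
{
	int old = expected;

	atomic_compare_exchange_strong_explicit(lock, &old, LOCKED_VAL,
						memory_order_relaxed,
						memory_order_relaxed);
	return old;	/* caller: if (old == expected) -> acquired */
}

/*
 * try_cmpxchg style: returns a boolean, so x86 can branch directly on
 * the ZF that CMPXCHG already set; on failure the observed value is
 * written back into *expected, which also saves a reload in retry
 * loops.
 */
static bool lock_try_cmpxchg(atomic_int *lock, int *expected)
{
	return atomic_compare_exchange_strong_explicit(lock, expected,
						       LOCKED_VAL,
						       memory_order_relaxed,
						       memory_order_relaxed);
}

int main(void)
{
	atomic_int lock = 42;
	int val = 42;

	if (lock_try_cmpxchg(&lock, &val))
		printf("acquired: lock=%d\n", atomic_load(&lock));
	return 0;
}

Same semantics either way; the win is just folding the success check
into the flag output of the CMPXCHG itself.
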
Will