Message-ID: <20160606045959.GE23133@insomnia>
Date: Mon, 6 Jun 2016 12:59:59 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>,
Paul Mackerras <paulus@...ba.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [v2] powerpc: spinlock: Fix spin_unlock_wait()

On Mon, Jun 06, 2016 at 02:52:05PM +1000, Michael Ellerman wrote:
> On Fri, 2016-03-06 at 03:49:48 UTC, Boqun Feng wrote:
> > There is an ordering issue with spin_unlock_wait() on powerpc, because
> > the spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the
> > load part of the operation with memory operations following it.
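
To make the requirement concrete, here is a minimal litmus-style sketch
(my illustration, not text from the patch; x and y are two shared
variables, both initially zero):

	CPU 0                            CPU 1
	=====                            =====
	WRITE_ONCE(x, 1);                spin_lock(&lock);
	spin_unlock_wait(&lock);         r1 = READ_ONCE(x);
	r0 = READ_ONCE(y);               WRITE_ONCE(y, 1);
	                                 spin_unlock(&lock);

spin_unlock_wait() is supposed to forbid the outcome r0 == 0 && r1 == 0:
either the waiter observes the lock held and waits the critical section
out (so it must then see y == 1), or the critical section begins after
the wait's observation of the lock (so it must see x == 1). With only a
plain load of the lock word and no full barriers around it, neither leg
is guaranteed.
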
>
> ...
> > diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> > index 523673d7583c..2ed893662866 100644
> > --- a/arch/powerpc/include/asm/spinlock.h
> > +++ b/arch/powerpc/include/asm/spinlock.h
> > @@ -162,12 +181,23 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > lock->slock = 0;
> > }
> >
> > -#ifdef CONFIG_PPC64
> > -extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
> > -#else
> > -#define arch_spin_unlock_wait(lock) \
> > - do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
> > -#endif
> > +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> > +{
> > + smp_mb();
> > +
> > + if (!arch_spin_is_locked_sync(lock))
> > + goto out;
> > +
> > + while (!arch_spin_value_unlocked(*lock)) {
> > + HMT_low();
> > + if (SHARED_PROCESSOR)
> > + __spin_yield(lock);
> > + }
> > + HMT_medium();
> > +
> > +out:
> > + smp_mb();
> > +}
>
> I think this would actually be easier to follow if it was all just in one routine:
>
> static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> {
> arch_spinlock_t lock_val;
>
> smp_mb();
>
> /*
> * Atomically load and store back the lock value (unchanged). This
> * ensures that our observation of the lock value is ordered with
> * respect to other lock operations.
> */
> __asm__ __volatile__(
> "1: " PPC_LWARX(%0, 0, %2, 1) "\n"
> " stwcx. %0, 0, %2\n"
> " bne- 1b\n"
> : "=&r" (lock_val), "+m" (*lock)
> : "r" (lock)
> : "cr0", "xer");
>
> if (arch_spin_value_unlocked(lock_val))
> goto out;
>
> while (!arch_spin_value_unlocked(*lock)) {
> HMT_low();
> if (SHARED_PROCESSOR)
> __spin_yield(lock);
> }
> HMT_medium();
>
> out:
> smp_mb();
> }
>
>
> Thoughts?
>
Makes sense. I admit that I somewhat overdesigned it by introducing
arch_spin_is_locked_sync().

This version is better, thank you!
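
For the record, the kind of caller this has to support looks roughly
like the following (a simplified, hypothetical sketch; obj->gone,
use() and destroy() are made up for illustration):

	/* Teardown side. */
	WRITE_ONCE(obj->gone, true);
	spin_unlock_wait(&obj->lock);	/* wait out any current holder */
	destroy(obj);			/* later holders must see ->gone */

	/* User side. */
	spin_lock(&obj->lock);
	if (!READ_ONCE(obj->gone))
		use(obj);
	spin_unlock(&obj->lock);

The smp_mb() before the loop orders the teardown side's earlier stores
against the observation of the lock, the lwarx/stwcx. pair makes that
observation part of the lock word's sequence of updates so it cannot
slip past a concurrent acquisition, and the smp_mb() after the loop
keeps destroy() from being reordered before the wait.
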
Regards,
Boqun
> cheers