Date: Mon, 6 Jun 2016 13:56:55 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: linuxppc-dev@...ts.ozlabs.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Will Deacon <will.deacon@....com>,
Boqun Feng <boqun.feng@...il.com>
Subject: Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> + arch_spinlock_t lock_val;
> +
> + smp_mb();
> +
> + /*
> + * Atomically load and store back the lock value (unchanged). This
> + * ensures that our observation of the lock value is ordered with
> + * respect to other lock operations.
> + */
> + __asm__ __volatile__(
> +"1: " PPC_LWARX(%0, 0, %2, 0) "\n"
> +" stwcx. %0, 0, %2\n"
> +" bne- 1b\n"
> + : "=&r" (lock_val), "+m" (*lock)
> + : "r" (lock)
> + : "cr0", "xer");
> +
> + if (arch_spin_value_unlocked(lock_val))
> + goto out;
> +
> + while (!arch_spin_value_unlocked(*lock)) {
> + HMT_low();
> + if (SHARED_PROCESSOR)
> + __spin_yield(lock);
> + }
> + HMT_medium();
> +
> +out:
> + smp_mb();
> +}
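
If I'm reading the ll/sc pair right, it is basically an atomic no-op
RMW on the lock word, so that the load takes part in the lock's
ordering. In generic-atomics terms that would be roughly the below
(just a sketch, not part of the patch; I'm assuming the usual ->slock
field and the _relaxed cmpxchg variant, since the asm itself carries
no barriers -- the smp_mb()s around it do):

	u32 tmp;

	do {
		/* load-linked: observe the current lock value */
		tmp = READ_ONCE(lock->slock);
		/* store-conditional: write that same value back */
	} while (cmpxchg_relaxed(&lock->slock, tmp, tmp) != tmp);
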
Why the move to in-line this implementation? It looks like a fairly big
function.
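
For comparison, keeping it out of line would look something like the
below (a sketch only; the exact file split and the EXPORT_SYMBOL are
my assumption):

	/* arch/powerpc/include/asm/spinlock.h */
	extern void arch_spin_unlock_wait(arch_spinlock_t *lock);

	/* arch/powerpc/lib/locks.c */
	void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		/* same body as in the patch above */
	}
	EXPORT_SYMBOL(arch_spin_unlock_wait);

That way only the declaration lands in every translation unit that
pulls in spinlock.h.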