Message-ID: <20170702035807.tnzmkynyevfobt5a@tardis>
Date: Sun, 2 Jul 2017 11:58:07 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
parri.andrea@...il.com, dave@...olabs.net,
manfred@...orfullife.com, arnd@...db.de, peterz@...radead.org,
netdev@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
will.deacon@....com, oleg@...hat.com, mingo@...hat.com,
netfilter-devel@...r.kernel.org, tj@...nel.org,
stern@...land.harvard.edu, akpm@...ux-foundation.org,
torvalds@...ux-foundation.org, Paul Mackerras <paulus@...ba.org>
Subject: Re: [PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions
On Thu, Jun 29, 2017 at 05:01:29PM -0700, Paul E. McKenney wrote:
> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair. This commit therefore removes the underlying arch-specific
> arch_spin_unlock_wait().
>
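Just to illustrate the "lock/unlock pair" replacement mentioned above,
here is a minimal sketch (the struct and function names below are made
up for illustration, not taken from any call site converted by this
series):

	#include <linux/spinlock.h>

	struct foo {
		spinlock_t lock;
		/* ... data protected by ->lock ... */
	};

	static void foo_wait_for_current_holder(struct foo *f)
	{
		/*
		 * Old pattern: wait until any current holder of
		 * f->lock has released it:
		 *
		 *	spin_unlock_wait(&f->lock);
		 *
		 * New pattern: acquire and immediately release the
		 * lock, which gives at least as strong a guarantee.
		 */
		spin_lock(&f->lock);
		spin_unlock(&f->lock);
	}

Compared to the old wait, the pair does take the lock briefly, but as
the changelog notes, the callers can do just as well with that.
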
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> Cc: Paul Mackerras <paulus@...ba.org>
> Cc: Michael Ellerman <mpe@...erman.id.au>
> Cc: <linuxppc-dev@...ts.ozlabs.org>
> Cc: Will Deacon <will.deacon@....com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Andrea Parri <parri.andrea@...il.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Acked-by: Boqun Feng <boqun.feng@...il.com>
Regards,
Boqun
> ---
> arch/powerpc/include/asm/spinlock.h | 33 ---------------------------------
> 1 file changed, 33 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 8c1b913de6d7..d256e448ea49 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  	lock->slock = 0;
>  }
>  
> -static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> -{
> -	arch_spinlock_t lock_val;
> -
> -	smp_mb();
> -
> -	/*
> -	 * Atomically load and store back the lock value (unchanged). This
> -	 * ensures that our observation of the lock value is ordered with
> -	 * respect to other lock operations.
> -	 */
> -	__asm__ __volatile__(
> -"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
> -"	stwcx. %0, 0, %2\n"
> -"	bne- 1b\n"
> -	: "=&r" (lock_val), "+m" (*lock)
> -	: "r" (lock)
> -	: "cr0", "xer");
> -
> -	if (arch_spin_value_unlocked(lock_val))
> -		goto out;
> -
> -	while (lock->slock) {
> -		HMT_low();
> -		if (SHARED_PROCESSOR)
> -			__spin_yield(lock);
> -	}
> -	HMT_medium();
> -
> -out:
> -	smp_mb();
> -}
> -
>  /*
>   * Read-write spinlocks, allowing multiple readers
>   * but only one writer.
> --
> 2.5.2
>