lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 29 Jun 2017 17:01:29 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org, oleg@...hat.com,
	akpm@...ux-foundation.org, mingo@...hat.com, dave@...olabs.net,
	manfred@...orfullife.com, tj@...nel.org, arnd@...db.de,
	linux-arch@...r.kernel.org, will.deacon@....com, peterz@...radead.org,
	stern@...land.harvard.edu, parri.andrea@...il.com,
	torvalds@...ux-foundation.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Paul Mackerras <paulus@...ba.org>,
	Michael Ellerman <mpe@...erman.id.au>, <linuxppc-dev@...ts.ozlabs.org>
Subject: [PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions

There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore removes the underlying arch-specific
arch_spin_unlock_wait().

Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: <linuxppc-dev@...ts.ozlabs.org>
Cc: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Andrea Parri <parri.andrea@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
---
 arch/powerpc/include/asm/spinlock.h | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 8c1b913de6d7..d256e448ea49 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 	lock->slock = 0;
 }
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	arch_spinlock_t lock_val;
-
-	smp_mb();
-
-	/*
-	 * Atomically load and store back the lock value (unchanged). This
-	 * ensures that our observation of the lock value is ordered with
-	 * respect to other lock operations.
-	 */
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
-"	stwcx. %0, 0, %2\n"
-"	bne- 1b\n"
-	: "=&r" (lock_val), "+m" (*lock)
-	: "r" (lock)
-	: "cr0", "xer");
-
-	if (arch_spin_value_unlocked(lock_val))
-		goto out;
-
-	while (lock->slock) {
-		HMT_low();
-		if (SHARED_PROCESSOR)
-			__spin_yield(lock);
-	}
-	HMT_medium();
-
-out:
-	smp_mb();
-}
-
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
-- 
2.5.2