Open Source and information security mailing list archives
Date: Thu, 29 Jun 2017 17:01:31 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org, oleg@...hat.com,
	akpm@...ux-foundation.org, mingo@...hat.com, dave@...olabs.net,
	manfred@...orfullife.com, tj@...nel.org, arnd@...db.de,
	linux-arch@...r.kernel.org, will.deacon@....com, peterz@...radead.org,
	stern@...land.harvard.edu, parri.andrea@...il.com,
	torvalds@...ux-foundation.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Yoshinori Sato <ysato@...rs.sourceforge.jp>,
	Rich Felker <dalias@...c.org>, <linux-sh@...r.kernel.org>
Subject: [PATCH RFC 23/26] sh: Remove spin_unlock_wait() arch-specific definitions

There is no agreed-upon definition of spin_unlock_wait()'s semantics, and
it appears that all callers could do just as well with a lock/unlock pair.
This commit therefore removes the underlying arch-specific
arch_spin_unlock_wait().

Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Yoshinori Sato <ysato@...rs.sourceforge.jp>
Cc: Rich Felker <dalias@...c.org>
Cc: <linux-sh@...r.kernel.org>
Cc: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Andrea Parri <parri.andrea@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
---
 arch/sh/include/asm/spinlock-cas.h  | 5 -----
 arch/sh/include/asm/spinlock-llsc.h | 5 -----
 2 files changed, 10 deletions(-)

diff --git a/arch/sh/include/asm/spinlock-cas.h b/arch/sh/include/asm/spinlock-cas.h
index c46e8cc7b515..5ed7dbbd94ff 100644
--- a/arch/sh/include/asm/spinlock-cas.h
+++ b/arch/sh/include/asm/spinlock-cas.h
@@ -29,11 +29,6 @@ static inline unsigned __sl_cas(volatile unsigned *p, unsigned old, unsigned new
 #define arch_spin_is_locked(x)		((x)->lock <= 0)
 #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_cond_load_acquire(&lock->lock, VAL > 0);
-}
-
 static inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	while (!__sl_cas(&lock->lock, 1, 0));
diff --git a/arch/sh/include/asm/spinlock-llsc.h b/arch/sh/include/asm/spinlock-llsc.h
index cec78143fa83..f77263aae760 100644
--- a/arch/sh/include/asm/spinlock-llsc.h
+++ b/arch/sh/include/asm/spinlock-llsc.h
@@ -21,11 +21,6 @@
 #define arch_spin_is_locked(x)		((x)->lock <= 0)
 #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_cond_load_acquire(&lock->lock, VAL > 0);
-}
-
 /*
  * Simple spin lock operations.  There are two variants, one clears IRQ's
  * on the local processor, one does not.
-- 
2.5.2
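[Editor's illustration, not part of the patch: the commit message says callers of spin_unlock_wait() can "do just as well with a lock/unlock pair". The sketch below shows that idiom in userspace with POSIX mutexes rather than kernel spinlocks; the function names (wait_for_holder, holder) and the scenario are hypothetical, chosen only to demonstrate why acquiring and immediately releasing a lock is a valid way to wait out any current lock holder.]

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_state = 0;

/* The lock/unlock pair that replaces a spin_unlock_wait()-style call:
 * pthread_mutex_lock() cannot return while another thread holds the
 * lock, so by the time we unlock, any prior critical section is done. */
static void wait_for_holder(void)
{
	pthread_mutex_lock(&lock);	/* blocks until any holder releases */
	pthread_mutex_unlock(&lock);	/* we never needed the lock itself */
}

static void *holder(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	shared_state = 42;		/* critical section */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, holder, NULL);
	wait_for_holder();		/* cannot return mid-critical-section */
	pthread_join(t, NULL);
	assert(shared_state == 42);	/* holder's write is complete */
	printf("ok\n");
	return 0;
}

The pair is strictly stronger than the old primitive: it not only observes that the lock was free at some instant but also orders the waiter after the holder's critical section, which is why the series can drop the loosely specified arch_spin_unlock_wait() implementations.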