Message-Id: <e5eb7be2db286ec3fdaf5cf5451e653d6a6d95b8.1475144721.git.jslaby@suse.cz>
Date: Thu, 29 Sep 2016 12:25:25 +0200
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, Will Deacon <will.deacon@....com>,
Peter Zijlstra <peterz@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 089/119] arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
From: Will Deacon <will.deacon@....com>
3.12-stable review patch. If anyone has any objections, please let me know.
===============
commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.
smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.
Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
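For context (not part of this patch), here is a sketch of the kind of caller
the barrier exists for, loosely modelled on the wake-up path; "cond" and the
exact lock/field names are placeholders, not taken from this change:

	cond = 1;				/* store before taking the lock */

	smp_mb__before_spinlock();		/* order the store above against ... */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (!(p->state & state))		/* ... this load inside the section */
		goto out;

With the core smp_wmb() definition, only store->store ordering is provided,
so on arm64 the prior store and the load inside the acquire-based critical
section can still be reordered; mapping the macro to smp_mb() restores the
required store->load ordering.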
Cc: Peter Zijlstra <peterz@...radead.org>
Reported-by: Alan Stern <stern@...land.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@....com>
Signed-off-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
arch/arm64/include/asm/spinlock.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 0defa0728a9b..c3cab6f87de4 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -200,4 +200,14 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock() smp_mb()
+
#endif /* __ASM_SPINLOCK_H */
--
2.10.0