Message-ID: <lsq.1479082460.394274199@decadent.org.uk>
Date:   Mon, 14 Nov 2016 00:14:20 +0000
From:   Ben Hutchings <ben@...adent.org.uk>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:     akpm@...ux-foundation.org, "Will Deacon" <will.deacon@....com>,
        "Alan Stern" <stern@...land.harvard.edu>,
        "Catalin Marinas" <catalin.marinas@....com>,
        "Peter Zijlstra" <peterz@...radead.org>
Subject: [PATCH 3.16 261/346] arm64: spinlocks: implement smp_mb__before_spinlock()
 as smp_mb()

3.16.39-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon <will.deacon@....com>

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.
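
For illustration, the intended usage pattern is roughly the following
sketch (the structure and field names here are placeholders, not taken
from any particular caller):

	/* Publish a flag, then inspect state under the lock.  Without a
	 * full barrier, the store to ->flag could be reordered past the
	 * loads performed inside the critical section.
	 */
	WRITE_ONCE(p->flag, 1);
	smp_mb__before_spinlock();	/* order the store above against ... */
	spin_lock(&p->lock);		/* ... accesses inside the section   */
	val = READ_ONCE(p->pending);
	spin_unlock(&p->lock);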

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
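
At the time, the generic fallback (in include/linux/spinlock.h) was
approximately:

	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif

smp_wmb() only orders prior stores against later stores, so a store
appearing before the lock can still be reordered with loads issued
inside a critical section entered via a load-acquire spin_lock().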

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().

Cc: Peter Zijlstra <peterz@...radead.org>
Reported-by: Alan Stern <stern@...land.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@....com>
Signed-off-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 arch/arm64/include/asm/spinlock.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -231,4 +231,14 @@ static inline int arch_read_trylock(arch
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
 
+/*
+ * Accesses appearing in program order before a spin_lock() operation
+ * can be reordered with accesses inside the critical section, by virtue
+ * of arch_spin_lock being constructed using acquire semantics.
+ *
+ * In cases where this is problematic (e.g. try_to_wake_up), an
+ * smp_mb__before_spinlock() can restore the required ordering.
+ */
+#define smp_mb__before_spinlock()	smp_mb()
+
 #endif /* __ASM_SPINLOCK_H */
