Date:   Wed, 31 Aug 2016 15:42:28 +0200
From:   Manfred Spraul <manfred@...orfullife.com>
To:     benh@...nel.crashing.org, paulmck@...ux.vnet.ibm.com,
        Ingo Molnar <mingo@...e.hu>, Boqun Feng <boqun.feng@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     LKML <linux-kernel@...r.kernel.org>, 1vier1@....de,
        Davidlohr Bueso <dave@...olabs.net>,
        Manfred Spraul <manfred@...orfullife.com>
Subject: [PATCH 3/5] spinlock: define spinlock_store_acquire

A spin_lock() is an ACQUIRE only with regard to reading the lock state.
The store that marks the lock as taken may be postponed past the first
operations within the protected area, see e.g. commit 51d7d5205d33
("powerpc: Add smp_mb() to arch_spin_is_locked()").

The patch defines a new spinlock_store_acquire() primitive: it
guarantees that the lock store is ordered before the following load
and store operations. Adding the barrier into spin_is_locked() does
not help, as not every affected caller uses spin_is_locked() or
spin_unlock_wait().
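
With the new primitive, such a fast path pairs the lock with an
explicit ordering point (sketch only; the real conversion is in the
ipc/sem.c hunk below):

	spin_lock(&sem->lock);
	spinlock_store_acquire();	/* lock store visible before the load */
	if (!smp_load_acquire(&sma->complex_mode)) {
		/* fast path */
	}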

The patch:
- adds the definition to <linux/spinlock.h> (defaulting to smp_mb(),
  which is safe for all architectures)
- converts ipc/sem.c to the new primitive.

For overriding, the same approach as for smp_mb__before_spinlock() is
used: if an architecture already defines spinlock_store_acquire, then
that definition is not changed.

The default is smp_mb(), to ensure that no architecture gets broken.
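
For example, an architecture whose spin_lock() already orders the lock
store before all later memory accesses could override the default with
a plain compiler barrier (hypothetical snippet, not part of this patch):

	/* hypothetical override in arch/xyz/include/asm/spinlock.h */
	#define spinlock_store_acquire()	barrier()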

Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
---
 include/linux/spinlock.h | 12 ++++++++++++
 ipc/sem.c                |  8 +-------
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 47dd0ce..496f288 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -130,6 +130,18 @@ do {								\
 #define smp_mb__before_spinlock()	smp_wmb()
 #endif
 
+#ifndef spinlock_store_acquire
+/**
+ * spinlock_store_acquire() - Order the lock store before subsequent accesses
+ *
+ * spin_lock() provides ACQUIRE semantics regarding reading the lock.
+ * There is no guarantee that the lock write is visible before any read
+ * or write operation within the protected area is performed.
+ * If the lock write must happen first, this function is required.
+ */
+#define spinlock_store_acquire()	smp_mb()
+#endif
+
 /**
  * raw_spin_unlock_wait - wait until the spinlock gets unlocked
  * @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 6586e0a..49d0ae3 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -355,13 +355,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		 */
 		spin_lock(&sem->lock);
 
-		/*
-		 * See 51d7d5205d33
-		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
-		 * A full barrier is required: the write of sem->lock
-		 * must be visible before the read is executed
-		 */
-		smp_mb();
+		spinlock_store_acquire();
 
 		if (!smp_load_acquire(&sma->complex_mode)) {
 			/* fast path successful! */
-- 
2.7.4
