Message-Id: <1472477669-27508-5-git-send-email-manfred@colorfullife.com>
Date:   Mon, 29 Aug 2016 15:34:29 +0200
From:   Manfred Spraul <manfred@...orfullife.com>
To:     benh@...nel.crashing.org, paulmck@...ux.vnet.ibm.com,
        Ingo Molnar <mingo@...e.hu>, Boqun Feng <boqun.feng@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     LKML <linux-kernel@...r.kernel.org>, 1vier1@....de,
        Davidlohr Bueso <dave@...olabs.net>,
        Manfred Spraul <manfred@...orfullife.com>
Subject: [PATCH 4/4 V4] qspinlock for x86: smp_mb__after_spin_lock() is free

For x86 qspinlocks, no additional memory barrier is required in
smp_mb__after_spin_lock(): the locked atomic operation that acquires
the lock already acts as a full memory barrier.

Theoretically, for qspinlock we could define two barriers:
- smp_mb__after_spin_lock(): free for x86, not free for powerpc.
- smp_mb__between_spin_lock_and_spin_unlock_wait():
	free for all architectures, see queued_spin_unlock_wait() for
	details.

As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used in
any hot paths, the patch does not introduce that define yet.
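
To illustrate the ordering in question (a minimal sketch, not part of
this patch; "lock" and "flag" are placeholder names, and the pairing
follows the queued_spin_unlock_wait() pattern):

	/* CPU 0 */                     /* CPU 1 */
	spin_lock(&lock);               WRITE_ONCE(flag, 1);
	smp_mb__after_spin_lock();      smp_mb();
	r0 = READ_ONCE(flag);           spin_unlock_wait(&lock);

The outcome where CPU 0 reads r0 == 0 while CPU 1's
spin_unlock_wait() returns without waiting for CPU 0's critical
section must be forbidden. A plain spin_lock() is only an ACQUIRE:
it does not order the store that takes the lock against the later
load of flag. On x86 the lock is acquired with a locked atomic
instruction, which is already a full barrier, hence the empty define
below.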

Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
---
 arch/x86/include/asm/qspinlock.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index eaba080..04d26ed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -61,6 +61,17 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
 }
 #endif /* CONFIG_PARAVIRT */
 
+#ifndef smp_mb__after_spin_lock
+/**
+ * smp_mb__after_spin_lock() - Provide smp_mb() after spin_lock
+ *
+ * queued_spin_lock() provides full memory barrier semantics,
+ * thus no further memory barrier is required. See
+ * queued_spin_unlock_wait() for further details.
+ */
+#define smp_mb__after_spin_lock()	do { } while (0)
+#endif
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_X86_QSPINLOCK_H */
-- 
2.5.5
