Message-ID: <20160201143724.GW6357@twins.programming.kicks-ass.net>
Date:	Mon, 1 Feb 2016 15:37:24 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Will Deacon <will.deacon@....com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, Davidlohr Bueso <dave@...olabs.net>,
	Ingo Molnar <mingo@...nel.org>, parri.andrea@...il.com
Subject: [RFC][PATCH] locking/mcs: Fix ordering for mcs_spin_lock()


Given the patch below, we now have an unconditional full global
barrier in the lock path; does this make the MCS spinlock RCsc?

The 'problem' is that this barrier can happen before we actually acquire
the lock. That is, if we hit arch_mcs_spin_lock_contended(), _that_ will
be the acquire barrier, and we end up with a SYNC in between unlock and
lock -- i.e. not an smp_mb__after_unlock_lock() equivalent.
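
Concretely, the contended slow path then looks roughly like the sketch
below (condensed, not the literal file contents; field names ->next and
->locked as in kernel/locking/mcs_spinlock.h):

	node->locked = 0;
	node->next   = NULL;

	prev = xchg(lock, node);	/* full barrier, SYNC on PPC */
	if (likely(prev == NULL))
		return;			/* uncontended: this doubles as ACQUIRE */

	WRITE_ONCE(prev->next, node);

	/* contended: the actual ACQUIRE only happens here, after the SYNC */
	arch_mcs_spin_lock_contended(&node->locked);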



---
Subject: locking/mcs: Fix ordering for mcs_spin_lock()
From: Peter Zijlstra <peterz@...radead.org>
Date: Mon Feb  1 15:11:28 CET 2016

Similar to commit b4b29f94856a ("locking/osq: Fix ordering of node
initialisation in osq_lock"), the use of xchg_acquire() is
fundamentally broken with MCS-like constructs.

Furthermore, it turns out we rely on the global transitivity of this
operation because the unlock path observes the pointer with a
READ_ONCE(), not an smp_load_acquire() (see the sketch after the patch).

This is non-critical because the MCS code isn't actually used and
mostly serves as documentation, a stepping stone to the more complex
things we've built on top of the idea.

Cc: Will Deacon <will.deacon@....com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Reported-by: Andrea Parri <parri.andrea@...il.com>
Fixes: 3552a07a9c4a ("locking/mcs: Use acquire/release semantics")
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/locking/mcs_spinlock.h |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -67,7 +67,13 @@ void mcs_spin_lock(struct mcs_spinlock *
 	node->locked = 0;
 	node->next   = NULL;
 
-	prev = xchg_acquire(lock, node);
+	/*
+	 * We rely on the full barrier with global transitivity implied by the
+	 * below xchg() to order the initialization stores above against any
+	 * observation of @node. And to provide the ACQUIRE ordering associated
+	 * with a LOCK primitive.
+	 */
+	prev = xchg(lock, node);
 	if (likely(prev == NULL)) {
 		/*
 		 * Lock acquired, don't need to set node->locked to 1. Threads
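
For reference, the unlock side mentioned above is roughly the following
(condensed sketch, not the literal file contents):

	/*
	 * Plain load, not smp_load_acquire(); hence the reliance on the
	 * globally transitive full barrier in the lock path.
	 */
	next = READ_ONCE(node->next);

	if (likely(!next)) {
		/* No successor yet: release the lock by restoring NULL. */
		if (likely(cmpxchg_release(lock, node, NULL) == node))
			return;
		/* A successor is queueing up; wait for it to link in. */
		while (!(next = READ_ONCE(node->next)))
			cpu_relax();
	}

	/* Pass the lock to the next waiter. */
	arch_mcs_spin_unlock_contended(&next->locked);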
