Date:	Fri, 25 Jan 2013 15:30:09 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Michel Lespinasse <walken@...gle.com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	"Paul E. McKenney" <paulmck@...ibm.com>,
	David Howells <dhowells@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Eric Dumazet <edumazet@...gle.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	linux-kernel@...r.kernel.org
Subject: [RFC PATCH 7/6] kernel: document fast queue spinlocks

Document the fast queue spinlocks in a way that I can understand.

Signed-off-by: Rik van Riel <riel@...hat.com>
---
This may still not be clear to others. Please let me know if you
would like me to change/enhance the documentation, so you can
understand it too.

 kernel/queue_spinlock.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/kernel/queue_spinlock.c b/kernel/queue_spinlock.c
index b571508..dc740fe 100644
--- a/kernel/queue_spinlock.c
+++ b/kernel/queue_spinlock.c
@@ -9,6 +9,23 @@
 #include <asm/processor.h>	/* for cpu_relax() */
 #include <asm/queue_spinlock.h>
 
+/*
+ * Fast queue spinlocks use a pool of tokens, which contain the actual locks,
+ * and are continuously moved around. Every spinlock is associated with one
+ * token, and every CPU is associated with two tokens.
+ *
+ * When taking a lock, one of the CPU's tokens is associated with the lock,
+ * and the lock's token is associated with the CPU.
+ *
+ * The token that gets associated with the spinlock at lock time will indicate
+ * the lock is busy. The token that was previously associated with the spinlock,
+ * and is now associated with the CPU taking the lock, will indicate whether
+ * the previous lock holder has already unlocked the lock.
+ *
+ * To unlock a fast queue spinlock, the CPU will unlock the token that it
+ * associated with the spinlock.
+ */
+
 DEFINE_PER_CPU(struct q_spinlock_token *, q_spinlock_token[2]);
 
 static inline struct q_spinlock_token *
@@ -25,8 +42,11 @@ ____q_spin_lock(struct q_spinlock *lock,
 
 	token = __this_cpu_read(*percpu_token);
 	token->wait = true;
+	/* Associate our (marked busy) token with the spinlock. */
 	prev = xchg(&lock->token, token);
+	/* The spinlock's old token is ours now. */
 	__this_cpu_write(*percpu_token, prev);
+	/* Wait for the spinlock's old token to be unlocked. */
 	while (ACCESS_ONCE(prev->wait))
 		cpu_relax();
 	q_spin_lock_mb();	/* guarantee acquire load semantics */

--