Message-ID: <1460618018.2871.25.camel@j-VirtualBox>
Date: Thu, 14 Apr 2016 00:13:38 -0700
From: Jason Low <jason.low2@...com>
To: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
terry.rudd@....com, "Long, Wai Man" <waiman.long@....com>,
"boqun.feng@...il.com" <boqun.feng@...il.com>,
"dave@...olabs.net" <dave@...olabs.net>, jason.low2@...com
Subject: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks

Use WFE to avoid most spinning with MCS spinlocks. This is implemented
on top of the new cmpwait() mechanism, which uses LDXR + WFE to wait
for the MCS locked value to change from the value last observed.
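
For reference, a rough sketch of what a 32-bit cmpwait() could look
like (this is a paraphrase of Will's proposed primitive, not part of
this patch; the exact naming and asm here are assumptions):

	/*
	 * Hypothetical sketch: LDXR loads the current value and arms the
	 * exclusive monitor. If the value still equals the one the caller
	 * last observed, WFE suspends the CPU until an event wakes it up,
	 * e.g. the exclusive monitor being cleared by a store to the
	 * monitored location.
	 */
	static inline void cmpwait(volatile int *ptr, int val)
	{
		unsigned int tmp;

		asm volatile(
		"	ldxr	%w[tmp], %[v]\n"
		"	eor	%w[tmp], %w[tmp], %w[val]\n"
		"	cbnz	%w[tmp], 1f\n"	/* value already changed */
		"	wfe\n"			/* otherwise wait for an event */
		"1:"
		: [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
		: [val] "r" (val));
	}
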
Signed-off-by: Jason Low <jason.low2@...com>
---
arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 arch/arm64/include/asm/mcs_spinlock.h
diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
new file mode 100644
index 0000000..d295d9d
--- /dev/null
+++ b/arch/arm64/include/asm/mcs_spinlock.h
@@ -0,0 +1,21 @@
+#ifndef __ASM_MCS_SPINLOCK_H
+#define __ASM_MCS_SPINLOCK_H
+
+#define arch_mcs_spin_lock_contended(l)				\
+do {								\
+	int locked_val;						\
+	for (;;) {						\
+		locked_val = READ_ONCE(*l);			\
+		if (locked_val)					\
+			break;					\
+		cmpwait(l, locked_val);				\
+	}							\
+	smp_rmb();						\
+} while (0)
+
+#define arch_mcs_spin_unlock_contended(l)			\
+do {								\
+	smp_store_release(l, 1);				\
+} while (0)
+
+#endif /* __ASM_MCS_SPINLOCK_H */
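
For context (not part of the patch), these hooks are called from the
generic MCS slowpath; the locker side, paraphrased from
kernel/locking/mcs_spinlock.h, looks roughly like this:

	struct mcs_spinlock {
		struct mcs_spinlock *next;
		int locked;			/* 1 if lock acquired */
	};

	static inline void mcs_spin_lock(struct mcs_spinlock **lock,
					 struct mcs_spinlock *node)
	{
		struct mcs_spinlock *prev;

		node->locked = 0;
		node->next = NULL;

		prev = xchg(lock, node);	/* queue at the tail */
		if (likely(prev == NULL))
			return;			/* uncontended: lock acquired */
		WRITE_ONCE(prev->next, node);

		/* With this patch: mostly sleep in WFE rather than spin. */
		arch_mcs_spin_lock_contended(&node->locked);
	}

The unlocker side hands the lock to the next queued waiter via
arch_mcs_spin_unlock_contended(&next->locked); that store-release is
what clears the waiter's exclusive monitor and wakes it from WFE.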
--
2.1.4