Message-ID: <tip-7f56b58a92aaf2cab049f32a19af7cc57a3972f2@git.kernel.org>
Date: Fri, 27 Apr 2018 02:40:51 -0700
From: tip-bot for Jason Low <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, mingo@...nel.org, tglx@...utronix.de,
hpa@...or.com, jason.low2@...com, will.deacon@....com,
longman@...hat.com, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org
Subject: [tip:locking/core] locking/mcs: Use smp_cond_load_acquire() in MCS
spin loop
Commit-ID: 7f56b58a92aaf2cab049f32a19af7cc57a3972f2
Gitweb: https://git.kernel.org/tip/7f56b58a92aaf2cab049f32a19af7cc57a3972f2
Author: Jason Low <jason.low2@...com>
AuthorDate: Thu, 26 Apr 2018 11:34:22 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 27 Apr 2018 09:48:49 +0200
locking/mcs: Use smp_cond_load_acquire() in MCS spin loop

For qspinlocks on ARM64, we would like to use WFE instead
of purely spinning. Qspinlocks internally have lock
contenders spin on an MCS lock.

Update arch_mcs_spin_lock_contended() such that it uses
the new smp_cond_load_acquire() so that ARM64 can also
override this spin loop with its own implementation using WFE.

On x86, this can also be cheaper than spinning on
smp_load_acquire().
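
For background, the generic fallback for smp_cond_load_acquire()
behaves roughly like the simplified sketch below (modelled on the
asm-generic version of the era; not a verbatim copy, details may
differ). The token VAL names the most recently loaded value inside
the condition expression, which is why the hunk below can pass a
bare "VAL" to mean "wait until *l is non-zero":

#define smp_cond_load_acquire(ptr, cond_expr) ({			\
	typeof(ptr) __PTR = (ptr);					\
	typeof(*ptr) VAL;						\
	for (;;) {							\
		/* acquire-ordered read of *ptr */			\
		VAL = smp_load_acquire(__PTR);				\
		if (cond_expr)						\
			break;						\
		cpu_relax();						\
	}								\
	VAL;	/* the value that satisfied the condition */		\
})

An architecture is free to override the whole primitive, which is the
point of the patch: on ARM64 the body of the loop can be replaced by
a wait-for-event (WFE) based implementation instead of a busy re-read.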
Signed-off-by: Jason Low <jason.low2@...com>
Signed-off-by: Will Deacon <will.deacon@....com>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Waiman Long <longman@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: boqun.feng@...il.com
Cc: linux-arm-kernel@...ts.infradead.org
Cc: paulmck@...ux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-9-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/locking/mcs_spinlock.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index f046b7ce9dd6..5e10153b4d3c 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -23,13 +23,15 @@ struct mcs_spinlock {
 
 #ifndef arch_mcs_spin_lock_contended
 /*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired. Additionally, some architectures such as
+ * ARM64 would like to do spin-waiting instead of purely
+ * spinning, and smp_cond_load_acquire() provides that behavior.
  */
 #define arch_mcs_spin_lock_contended(l)					\
 do {									\
-	while (!(smp_load_acquire(l)))					\
-		cpu_relax();						\
+	smp_cond_load_acquire(l, VAL);					\
 } while (0)
 #endif
 
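
To see the shape of the change outside the kernel, here is a small
userspace analogue using C11 atomics (illustration only: the names
mcs_wait_before/mcs_wait_after, cpu_relax_hint and
smp_cond_load_acquire_demo are invented for this sketch, and the
kernel primitive is only approximated):

#include <stdatomic.h>

/* Invented stand-in for the kernel's cpu_relax(). */
static inline void cpu_relax_hint(void)
{
#if defined(__x86_64__) || defined(__i386__)
	__builtin_ia32_pause();		/* x86 PAUSE spin-wait hint */
#endif
}

/* Before the patch: an open-coded polling loop on an acquire load. */
static void mcs_wait_before(_Atomic int *locked)
{
	while (!atomic_load_explicit(locked, memory_order_acquire))
		cpu_relax_hint();
}

/* After the patch: the same wait funnelled through one primitive.
 * A generic build spins exactly as above; an architecture override
 * (e.g. ARM64) can instead park the CPU with a wait-for-event
 * instruction until the flag's cache line is written. */
static int smp_cond_load_acquire_demo(_Atomic int *ptr)
{
	int val;

	while (!(val = atomic_load_explicit(ptr, memory_order_acquire)))
		cpu_relax_hint();
	return val;
}

static void mcs_wait_after(_Atomic int *locked)
{
	(void)smp_cond_load_acquire_demo(locked);
}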