Message-ID: <tip-ca50e426f96c905e7d14a9c7a6bd4e0330516047@git.kernel.org>
Date: Wed, 8 Jun 2016 07:27:14 -0700
From: tip-bot for Pan Xinhui <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: tglx@...utronix.de, xinhui.pan@...ux.vnet.ibm.com,
mingo@...nel.org, torvalds@...ux-foundation.org,
paulmck@...ux.vnet.ibm.com, akpm@...ux-foundation.org,
peterz@...radead.org, linux-kernel@...r.kernel.org, hpa@...or.com
Subject: [tip:locking/core] locking/qspinlock: Use
 atomic_sub_return_release() in queued_spin_unlock()

Commit-ID: ca50e426f96c905e7d14a9c7a6bd4e0330516047
Gitweb: http://git.kernel.org/tip/ca50e426f96c905e7d14a9c7a6bd4e0330516047
Author: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
AuthorDate: Fri, 3 Jun 2016 16:38:14 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 8 Jun 2016 15:17:01 +0200

locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock()

The existing version uses a heavy barrier while only release semantics
are required. So use atomic_sub_return_release() instead.

Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: arnd@...db.de
Cc: waiman.long@...com
Link: http://lkml.kernel.org/r/1464943094-3129-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 include/asm-generic/qspinlock.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 05f05f1..9f0681b 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -111,10 +111,9 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 {
 	/*
-	 * smp_mb__before_atomic() in order to guarantee release semantics
+	 * unlock() needs release semantics:
 	 */
-	smp_mb__before_atomic();
-	atomic_sub(_Q_LOCKED_VAL, &lock->val);
+	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
 }
 
 #endif
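
For readers who want to see the ordering requirement outside the kernel's
atomic_t API, below is a minimal userspace sketch of the same unlock path
using C11 atomics. The toy_qspinlock type, TOY_Q_LOCKED_VAL constant, and
function name are illustrative assumptions, not kernel code;
atomic_fetch_sub_explicit() with memory_order_release plays the role of
atomic_sub_return_release().

#include <stdatomic.h>

#define TOY_Q_LOCKED_VAL 1U

struct toy_qspinlock {
	atomic_uint val;	/* low byte holds the locked flag */
};

static inline void toy_queued_spin_unlock(struct toy_qspinlock *lock)
{
	/*
	 * Release ordering only: loads and stores inside the critical
	 * section may not be reordered past this subtraction, which is
	 * exactly what unlock() must guarantee.  The old value returned
	 * by the fetch-and-sub is unused, hence the (void) cast --
	 * mirroring the (void)atomic_sub_return_release() in the patch.
	 */
	(void)atomic_fetch_sub_explicit(&lock->val, TOY_Q_LOCKED_VAL,
					memory_order_release);
}

The practical win is on weakly ordered architectures such as arm64 and
powerpc, where the smp_mb() implied by smp_mb__before_atomic() is a full
fence and notably more expensive than a release operation; on strongly
ordered x86 the two unlock sequences cost about the same.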