Message-ID: <tip-eaff0e7003cca6c2748b67ead2d4b1a8ad858fc7@git.kernel.org>
Date: Mon, 29 Feb 2016 03:20:33 -0800
From: tip-bot for Waiman Long <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: akpm@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
hpa@...or.com, peterz@...radead.org, scott.norton@....com,
mingo@...nel.org, torvalds@...ux-foundation.org,
tglx@...utronix.de, linux-kernel@...r.kernel.org,
doug.hatch@....com, Waiman.Long@....com
Subject: [tip:locking/core] locking/pvqspinlock: Move lock stealing count
tracking code into pv_queued_spin_steal_lock()
Commit-ID: eaff0e7003cca6c2748b67ead2d4b1a8ad858fc7
Gitweb: http://git.kernel.org/tip/eaff0e7003cca6c2748b67ead2d4b1a8ad858fc7
Author: Waiman Long <Waiman.Long@....com>
AuthorDate: Thu, 10 Dec 2015 15:17:46 -0500
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 29 Feb 2016 10:02:41 +0100
locking/pvqspinlock: Move lock stealing count tracking code into pv_queued_spin_steal_lock()
This patch moves the lock stealing count tracking code into
pv_queued_spin_steal_lock() itself, instead of routing it through a
jacket function, which simplifies the code. The "qspinlock_stat.h"
include is moved up accordingly, so that qstat_inc() is already visible
where pv_queued_spin_steal_lock() is defined.
Signed-off-by: Waiman Long <Waiman.Long@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Douglas Hatch <doug.hatch@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Scott J Norton <scott.norton@....com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1449778666-13593-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/locking/qspinlock_paravirt.h | 16 +++++++++-------
kernel/locking/qspinlock_stat.h | 13 -------------
2 files changed, 9 insertions(+), 20 deletions(-)
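For readability, here is the net effect of the two hunks below, pieced
together into one place (a sketch only, not a drop-in patch: struct
__qspinlock, the _Q_* masks and qstat_inc() come from the surrounding
kernel code, and unrelated lines are elided).

Old scheme: qspinlock_stat.h wrapped the trylock in a jacket function
that did the counting:

static inline int qstat_spin_steal_lock(struct qspinlock *lock)
{
        int ret = pv_queued_spin_steal_lock(lock);

        qstat_inc(qstat_pv_lock_stealing, ret);
        return ret;
}
#undef queued_spin_trylock
#define queued_spin_trylock(l)  qstat_spin_steal_lock(l)

New scheme: pv_queued_spin_steal_lock() counts its own successes, so the
jacket function and the macro indirection go away:

static inline bool pv_queued_spin_steal_lock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;
        int ret = !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
                  (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0);

        qstat_inc(qstat_pv_lock_stealing, ret);
        return ret;
}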
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 87bb235..78f04a2 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -55,6 +55,11 @@ struct pv_node {
 };
 
 /*
+ * Include queued spinlock statistics code
+ */
+#include "qspinlock_stat.h"
+
+/*
  * By replacing the regular queued_spin_trylock() with the function below,
  * it will be called once when a lock waiter enter the PV slowpath before
  * being queued. By allowing one lock stealing attempt here when the pending
@@ -65,9 +70,11 @@ struct pv_node {
 static inline bool pv_queued_spin_steal_lock(struct qspinlock *lock)
 {
         struct __qspinlock *l = (void *)lock;
+        int ret = !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
+                  (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0);
 
-        return !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
-               (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0);
+        qstat_inc(qstat_pv_lock_stealing, ret);
+        return ret;
 }
 
 /*
@@ -138,11 +145,6 @@ static __always_inline int trylock_clear_pending(struct qspinlock *lock)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /*
- * Include queued spinlock statistics code
- */
-#include "qspinlock_stat.h"
-
-/*
  * Lock and MCS node addresses hash table for fast lookup
 *
 * Hashing is done on a per-cacheline basis to minimize the need to access
diff --git a/kernel/locking/qspinlock_stat.h b/kernel/locking/qspinlock_stat.h
index 640dcec..869988d 100644
--- a/kernel/locking/qspinlock_stat.h
+++ b/kernel/locking/qspinlock_stat.h
@@ -279,19 +279,6 @@ static inline void __pv_wait(u8 *ptr, u8 val)
 #define pv_kick(c)      __pv_kick(c)
 #define pv_wait(p, v)   __pv_wait(p, v)
 
-/*
- * PV unfair trylock count tracking function
- */
-static inline int qstat_spin_steal_lock(struct qspinlock *lock)
-{
-        int ret = pv_queued_spin_steal_lock(lock);
-
-        qstat_inc(qstat_pv_lock_stealing, ret);
-        return ret;
-}
-#undef queued_spin_trylock
-#define queued_spin_trylock(l)  qstat_spin_steal_lock(l)
-
 #else /* CONFIG_QUEUED_LOCK_STAT */
 
 static inline void qstat_inc(enum qlock_stats stat, bool cond) { }
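The qstat_inc() stub above is the usual compile-time statistics switch:
with CONFIG_QUEUED_LOCK_STAT disabled, every qstat_inc() call collapses
to an empty inline and costs nothing. A minimal, self-contained sketch
of that pattern (the hypothetical QUEUED_LOCK_STAT macro stands in for
the Kconfig option, and a plain array stands in for the kernel's per-cpu
counters):

#include <stdbool.h>

enum qlock_stats {
        qstat_pv_lock_stealing,
        qstat_num,                      /* total number of counters */
};

#ifdef QUEUED_LOCK_STAT
static unsigned long qstat_count[qstat_num];

/* Stats enabled: bump the counter when the condition holds. */
static inline void qstat_inc(enum qlock_stats stat, bool cond)
{
        if (cond)
                qstat_count[stat]++;    /* the kernel uses per-cpu counters */
}
#else
/* Stats disabled: an empty inline, so call sites compile away. */
static inline void qstat_inc(enum qlock_stats stat, bool cond) { }
#endif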