Message-Id: <20220208184208.79303-13-namhyung@kernel.org>
Date: Tue, 8 Feb 2022 10:42:08 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Byungchul Park <byungchul.park@....com>,
"Paul E. McKenney" <paul.mckenney@...aro.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Radoslaw Burny <rburny@...gle.com>
Subject: [PATCH 12/12] locking: Move lock_acquired() from the fast path
The lock_acquired() function is used by CONFIG_LOCK_STAT to track wait
time for contended locks, so it is meaningful only when the given lock
takes the slow path (i.e. is contended). Move the call into the if
block so that the fast path can skip it. This also moves the
tracepoint so that it fires only after lock_contended().
In rare cases this might affect the bounce_acquired stats (when the
lock_acquired() call runs on a different cpu than the lock_acquire()
call), but I'm not sure that can happen in the uncontended case.
Otherwise, this should cause no functional change in LOCKDEP and
LOCK_STAT.
Userspace tools that use the tracepoint might see the difference, but
most of them should handle a missing lock_acquired() in the
non-contended case properly, as that is already what happens when a
trylock function grabs a lock. At least it seems fine for the perf
tools (the 'perf lock' command specifically).
Make a similar change in __mutex_lock_common() so that it calls
lock_acquired() only after lock_contended().
Signed-off-by: Namhyung Kim <namhyung@...nel.org>
---
Documentation/locking/lockstat.rst | 4 ++--
include/linux/lockdep.h | 12 ++++++------
kernel/locking/mutex.c | 4 +---
3 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/Documentation/locking/lockstat.rst b/Documentation/locking/lockstat.rst
index 536eab8dbd99..3638ad1113c2 100644
--- a/Documentation/locking/lockstat.rst
+++ b/Documentation/locking/lockstat.rst
@@ -28,11 +28,11 @@ The graph below shows the relation between the lock functions and the various
| __contended
| |
| <wait>
+ | |
+ | __acquired
| _______/
|/
|
- __acquired
- |
.
<hold>
.
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 4e728d2957db..63b75ad2e17c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -559,8 +559,8 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
lock(_lock); \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} while (0)
#define LOCK_CONTENDED_RETURN(_lock, try, lock) \
@@ -569,9 +569,9 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
____err = lock(_lock); \
+ if (!____err) \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- if (!____err) \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
____err; \
})
@@ -600,8 +600,8 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
lock(_lock); \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} while (0)
#define LOCK_CONTENDED_RETURN(_lock, try, lock) \
@@ -610,9 +610,9 @@ do { \
if (!try(_lock)) { \
lock_contended(&(_lock)->dep_map, _RET_IP_); \
____err = lock(_lock); \
+ if (!____err) \
+ lock_acquired(&(_lock)->dep_map, _RET_IP_); \
} \
- if (!____err) \
- lock_acquired(&(_lock)->dep_map, _RET_IP_); \
____err; \
})
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index f8bc4ae312a0..e67b5a16440b 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -605,8 +605,6 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (__mutex_trylock(lock) ||
mutex_optimistic_spin(lock, ww_ctx, NULL)) {
- /* got the lock, yay! */
- lock_acquired(&lock->dep_map, ip);
if (ww_ctx)
ww_mutex_set_context_fastpath(ww, ww_ctx);
preempt_enable();
@@ -708,10 +706,10 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
debug_mutex_free_waiter(&waiter);
-skip_wait:
/* got the lock - cleanup and rejoice! */
lock_acquired(&lock->dep_map, ip);
+skip_wait:
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);
--
2.35.0.263.gb82422642f-goog