Message-Id: <2f7168be0b92cffe1ddc762914344136ef471220.1745926331.git.namcao@linutronix.de>
Date: Tue, 29 Apr 2025 14:01:05 +0200
From: Nam Cao <namcao@...utronix.de>
To: Steven Rostedt <rostedt@...dmis.org>,
Gabriele Monaco <gmonaco@...hat.com>,
linux-trace-kernel@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: john.ogness@...utronix.de,
Nam Cao <namcao@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH v5 20/23] locking/rtmutex: Add block_on_rt_mutex tracepoints

Add block_on_rt_mutex_begin_tp and block_on_rt_mutex_end_tp tracepoints.
They are useful for implementing a runtime verification monitor that
detects priority inversion.

trace_contention_begin and trace_contention_end are similar to these new
tracepoints, but unfortunately they cannot be reused without breaking
userspace:

- The userspace tool perf-lock assumes that the "contention_begin"
  tracepoint means "current" is contending for the lock.
- The runtime verification monitor needs the tracepoint in
  rt_mutex_start_proxy_lock(). In that case, it is not "current" that is
  contending for the lock.

Signed-off-by: Nam Cao <namcao@...utronix.de>
---
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Will Deacon <will@...nel.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Waiman Long <longman@...hat.com>
---
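For reference, a minimal sketch of how a runtime verification monitor
could attach to the new tracepoints (not part of the patch; the
monitor_* probe names and bodies are hypothetical, only the
register_trace_*/unregister_trace_* helpers generated by
DECLARE_TRACE() are real):

  #include <linux/sched.h>
  #include <trace/events/lock.h>

  /* "task" is the blocking task, not necessarily "current" */
  static void monitor_block(void *data, struct task_struct *task)
  {
          /* task starts blocking on an rt_mutex */
  }

  static void monitor_unblock(void *data, struct task_struct *task)
  {
          /* task stops blocking: it took the lock or aborted the wait */
  }

  static int monitor_attach(void)
  {
          int ret;

          ret = register_trace_block_on_rt_mutex_begin_tp(monitor_block, NULL);
          if (ret)
                  return ret;

          ret = register_trace_block_on_rt_mutex_end_tp(monitor_unblock, NULL);
          if (ret)
                  unregister_trace_block_on_rt_mutex_begin_tp(monitor_block, NULL);

          return ret;
  }

The NULL argument is the per-probe data cookie handed back as the
probe's first parameter; a real monitor would likely pass its own
state there.
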
 include/trace/events/lock.h  | 8 ++++++++
 kernel/locking/rtmutex.c     | 2 ++
 kernel/locking/rtmutex_api.c | 4 ++++
 3 files changed, 14 insertions(+)

diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index 8e89baa3775f..d83ec2eaab22 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -138,6 +138,14 @@ TRACE_EVENT(contention_end,
 	TP_printk("%p (ret=%d)", __entry->lock_addr, __entry->ret)
 );
 
+DECLARE_TRACE(block_on_rt_mutex_begin_tp,
+	TP_PROTO(struct task_struct *task),
+	TP_ARGS(task));
+
+DECLARE_TRACE(block_on_rt_mutex_end_tp,
+	TP_PROTO(struct task_struct *task),
+	TP_ARGS(task));
+
 #endif /* _TRACE_LOCK_H */
 
 /* This part must be outside protection */
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4a8df1800cbb..08d33b74be13 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1707,6 +1707,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	set_current_state(state);
 	trace_contention_begin(lock, LCB_F_RT);
+	trace_block_on_rt_mutex_begin_tp(current);
 
 	ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk, wake_q);
 
 	if (likely(!ret))
@@ -1732,6 +1733,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	fixup_rt_mutex_waiters(lock, true);
 
 	trace_contention_end(lock, ret);
+	trace_block_on_rt_mutex_end_tp(current);
 
 	return ret;
 }
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 191e4720e546..35f9bd7cbd54 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -302,6 +302,8 @@ int __sched __rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
 	if (try_to_take_rt_mutex(lock, task, NULL))
 		return 1;
 
+	trace_block_on_rt_mutex_begin_tp(task);
+
 	/* We enforce deadlock detection for futexes */
 	ret = task_blocks_on_rt_mutex(lock, waiter, task, NULL,
 				      RT_MUTEX_FULL_CHAINWALK, wake_q);
@@ -391,6 +393,8 @@ int __sched rt_mutex_wait_proxy_lock(struct rt_mutex_base *lock,
 	fixup_rt_mutex_waiters(lock, true);
 	raw_spin_unlock_irq(&lock->wait_lock);
 
+	trace_block_on_rt_mutex_end_tp(current);
+
 	return ret;
 }
 
--
2.39.5