Message-ID: <20241124235019.274562787@goodmis.org>
Date: Sun, 24 Nov 2024 18:49:45 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Michael Jeanson <mjeanson@...icios.com>,
Peter Zijlstra <peterz@...radead.org>,
Alexei Starovoitov <ast@...nel.org>,
Yonghong Song <yhs@...com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
bpf@...r.kernel.org,
Joel Fernandes <joel@...lfernandes.org>,
Jordan Rife <jrife@...gle.com>,
linux-trace-kernel@...r.kernel.org
Subject: [for-next][PATCH 5/6] tracing: Remove conditional locking from __DO_TRACE()

From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>

Remove conditional locking by moving the __DO_TRACE() code into
trace_##name().

When the faultable syscall tracepoints were implemented, __DO_TRACE()
had an rcuidle argument which selected between SRCU and preempt disable.
Therefore, the RCU Tasks Trace protection for faultable syscall
tracepoints was introduced using the same pattern.

At that point, it did not appear obvious that this feedback from Linus [1]
applied here as well, because the __DO_TRACE() modification was
extending a pre-existing pattern.

Shortly before pulling the faultable syscall tracepoints modifications,
Steven removed the rcuidle argument and SRCU protection scheme entirely
from tracepoint.h:

  commit 48bcda684823 ("tracing: Remove definition of trace_*_rcuidle()")

This required a rebase of the faultable syscall tracepoints series,
which missed a perfect opportunity to integrate the prior recommendation
from Linus.

In response to the pull request, Linus pointed out [2] that he was not
pleased with the implementation, expecting this to be fixed in a
follow-up patch series.

Move the __DO_TRACE() code into trace_##name() within each of
__DECLARE_TRACE() and __DECLARE_TRACE_SYSCALL(). Use a scoped guard
to protect the preempt-disable-notrace and RCU Tasks Trace critical
sections.
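
For illustration only (this sketch is not part of the patch): assuming
the preempt_notrace and rcu_tasks_trace guard classes wrap
preempt_disable_notrace()/preempt_enable_notrace() and
rcu_read_lock_trace()/rcu_read_unlock_trace() respectively, the
trace_##name() inline generated by __DECLARE_TRACE_SYSCALL() for a
hypothetical tracepoint behaves roughly like the hand-expanded form
below. The tracepoint name "sys_enter_example" and its arguments are
made up, and the tracepoint condition check is omitted for brevity;
this is not the literal preprocessor output.

  /* Hand-expanded approximation of trace_sys_enter_example() */
  static inline void trace_sys_enter_example(struct pt_regs *regs, long id)
  {
          might_fault();
          if (static_branch_unlikely(&__tracepoint_sys_enter_example.key)) {
                  /*
                   * scoped_guard(rcu_tasks_trace) brackets the call with
                   * the Tasks Trace RCU read-side markers, which allow
                   * the attached probes to take page faults.
                   */
                  rcu_read_lock_trace();
                  __DO_TRACE_CALL(sys_enter_example, TP_ARGS(regs, id));
                  rcu_read_unlock_trace();
          }
          if (IS_ENABLED(CONFIG_LOCKDEP)) {
                  WARN_ONCE(!rcu_is_watching(),
                            "RCU not watching for tracepoint");
          }
  }

The variant generated by __DECLARE_TRACE() has the same shape, but uses
scoped_guard(preempt_notrace), i.e. preempt_disable_notrace() and
preempt_enable_notrace(), instead of Tasks Trace RCU, and it does not
call might_fault().
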
Link: https://lore.kernel.org/all/CAHk-=wggDLDeTKbhb5hh--x=-DQd69v41137M72m6NOTmbD-cw@mail.gmail.com/ [1]
Link: https://lore.kernel.org/lkml/CAHk-=witPrLcu22dZ93VCyRQonS7+-dFYhQbna=KBa-TAhayMw@mail.gmail.com/ [2]
Fixes: a363d27cdbc2 ("tracing: Allow system call tracepoints to handle page faults")
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Michael Jeanson <mjeanson@...icios.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Alexei Starovoitov <ast@...nel.org>
Cc: Yonghong Song <yhs@...com>
Cc: Paul E. McKenney <paulmck@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: bpf@...r.kernel.org
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Jordan Rife <jrife@...gle.com>
Cc: linux-trace-kernel@...r.kernel.org
Link: https://lore.kernel.org/20241123153031.2884933-5-mathieu.desnoyers@efficios.com
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
---
include/linux/tracepoint.h | 45 ++++++++++----------------------------
1 file changed, 12 insertions(+), 33 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 867f3c1ac7dc..832f49b56b1f 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -209,31 +209,6 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
#define __DO_TRACE_CALL(name, args) __traceiter_##name(NULL, args)
#endif /* CONFIG_HAVE_STATIC_CALL */
-/*
- * With @syscall=0, the tracepoint callback array dereference is
- * protected by disabling preemption.
- * With @syscall=1, the tracepoint callback array dereference is
- * protected by Tasks Trace RCU, which allows probes to handle page
- * faults.
- */
-#define __DO_TRACE(name, args, cond, syscall) \
- do { \
- if (!(cond)) \
- return; \
- \
- if (syscall) \
- rcu_read_lock_trace(); \
- else \
- preempt_disable_notrace(); \
- \
- __DO_TRACE_CALL(name, TP_ARGS(args)); \
- \
- if (syscall) \
- rcu_read_unlock_trace(); \
- else \
- preempt_enable_notrace(); \
- } while (0)
-
/*
* Make sure the alignment of the structure in the __tracepoints section will
* not add unwanted padding between the beginning of the section and the
@@ -282,10 +257,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), cond, PARAMS(data_proto)) \
static inline void trace_##name(proto) \
{ \
- if (static_branch_unlikely(&__tracepoint_##name.key)) \
- __DO_TRACE(name, \
- TP_ARGS(args), \
- TP_CONDITION(cond), 0); \
+ if (static_branch_unlikely(&__tracepoint_##name.key)) { \
+ if (cond) { \
+ scoped_guard(preempt_notrace) \
+ __DO_TRACE_CALL(name, TP_ARGS(args)); \
+ } \
+ } \
if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
@@ -297,10 +274,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
static inline void trace_##name(proto) \
{ \
might_fault(); \
- if (static_branch_unlikely(&__tracepoint_##name.key)) \
- __DO_TRACE(name, \
- TP_ARGS(args), \
- TP_CONDITION(cond), 1); \
+ if (static_branch_unlikely(&__tracepoint_##name.key)) { \
+ if (cond) { \
+ scoped_guard(rcu_tasks_trace) \
+ __DO_TRACE_CALL(name, TP_ARGS(args)); \
+ } \
+ } \
if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
--
2.45.2