Message-ID: <20241223184618.176607694@goodmis.org>
Date: Mon, 23 Dec 2024 13:46:18 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH 0/4] ftrace: Graph tracing performance enhancements and clean ups.
ftrace update for 6.14:
I've always known that function graph tracing had a significant overhead
compared to function tracing. More than just double (since it also
traces the return of functions). I never looked too deep into the cause,
but I just noticed one of the reasons. As opposed to the function tracer,
the function graph tracer disables interrupts for every function entry
and exit it traces. It has been doing this since it was created back in
2008! A lot has changed since then, and there's no reason to disable
interrupts as it can handle recursion. It also forces a disable that
even prevents NMIs from being traced (so the function graph tracer will
drop NMI functions if an NMI preempts a trace in progress).
This is totally unneeded, especially since most of the complex code of
the shadow stack has been removed from the function graph tracer and
put into the fgraph.c file. The function graph tracer is now just one
consumer of that code, and the disabling was there to protect code that
it no longer calls.
Remove the interrupt disabling as well as the forced disabling of the
function graph tracer while a trace is being recorded. This gives a
significant improvement to function graph tracing.
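To make the difference concrete, here is a rough sketch of the kind of
transformation involved (hypothetical names and structure, not the actual
diff): the old pattern disables interrupts and bumps a per-CPU "disabled"
counter around every entry, which is what causes NMI events to be dropped;
the new pattern relies on the existing recursion protection and only
disables preemption.

	/* Illustrative sketch only -- hypothetical names, not the actual diff. */
	#include <linux/atomic.h>
	#include <linux/irqflags.h>
	#include <linux/preempt.h>

	struct my_graph_data {			/* hypothetical per-CPU state */
		atomic_t	disabled;
	};

	static int my_record_entry(struct my_graph_data *data)
	{
		/* hypothetical: write the entry event to the ring buffer */
		return 1;
	}

	/* Old pattern: IRQs off around every entry, and an NMI that hits while
	 * a trace is in progress sees disabled > 1, so its event is dropped. */
	static int my_graph_entry_old(struct my_graph_data *data)
	{
		unsigned long flags;
		int ret = 0;

		local_irq_save(flags);
		if (atomic_inc_return(&data->disabled) == 1)
			ret = my_record_entry(data);
		atomic_dec(&data->disabled);
		local_irq_restore(flags);
		return ret;
	}

	/* New pattern: recursion is already handled elsewhere, so disabling
	 * preemption is enough, and IRQ/NMI context can be traced as well. */
	static int my_graph_entry_new(struct my_graph_data *data)
	{
		int ret;

		preempt_disable_notrace();
		ret = my_record_entry(data);
		preempt_enable_notrace();
		return ret;
	}
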
Running perf stat on "hackbench 10":
Before: 4.8423 +- 0.0892 seconds time elapsed ( +- 1.84% )
After: 3.490 +- 0.112 seconds time elapsed ( +- 3.22% )
That's ~28% speed up!
Do the same for the function profiler as well.
Before: 3.332 +- 0.101 seconds time elapsed ( +- 3.04% )
After: 1.9590 +- 0.0484 seconds time elapsed ( +- 2.47% )
Which gives ~41% speed up!!!
I may mark those commits with Fixes tags, but only for the performance
issue. I would not mark them for stable, as I would like to verify a bit
more that they do not cause any regressions. But for those that are
interested in better performance from the function graph tracer, they
may be good to backport.
The other two changes are clean ups of the ftrace code. The first
removes unneeded goto jumps: there were three places that simply jumped
to the end of the function to return a value, with no unlocking and no
freeing, just a simple return. Those were changed to return in place.
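For illustration, the pattern being removed looks roughly like this
(made-up function, not from the actual diff):

	#include <linux/errno.h>

	/* Before: the label does no unlocking or freeing, just returns. */
	static int my_check_before(int val)		/* hypothetical */
	{
		int ret = -EINVAL;

		if (val < 0)
			goto out;

		ret = val;
	 out:
		return ret;
	}

	/* After: return in place. */
	static int my_check_after(int val)		/* hypothetical */
	{
		if (val < 0)
			return -EINVAL;

		return val;
	}
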
The second change was to implement the guard() logic around mutexes.
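guard() comes from <linux/cleanup.h> and releases the mutex automatically
when it goes out of scope, which removes the unlock-on-every-exit-path
boilerplate. A minimal sketch with a hypothetical lock and state variable
(not from the actual diff):

	#include <linux/cleanup.h>
	#include <linux/mutex.h>
	#include <linux/errno.h>

	static DEFINE_MUTEX(my_lock);	/* hypothetical */
	static int my_state;		/* hypothetical */

	/* Before: every exit path must remember to unlock. */
	static int my_update_before(int val)
	{
		int ret = 0;

		mutex_lock(&my_lock);
		if (val < 0)
			ret = -EINVAL;
		else
			my_state = val;
		mutex_unlock(&my_lock);
		return ret;
	}

	/* After: guard() drops the mutex automatically at any return. */
	static int my_update_after(int val)
	{
		guard(mutex)(&my_lock);

		if (val < 0)
			return -EINVAL;
		my_state = val;
		return 0;
	}
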
All of these commits are agnostic from each other and may be applied
separately.
Steven Rostedt (4):
fgraph: Remove unnecessary disabling of interrupts and recursion
ftrace: Do not disable interrupts in profiler
ftrace: Remove unneeded goto jumps
ftrace: Switch ftrace.c code over to use guard()
----
kernel/trace/ftrace.c | 130 ++++++++++++-----------------------
kernel/trace/trace_functions_graph.c | 37 ++++------
2 files changed, 60 insertions(+), 107 deletions(-)