Message-ID: <20180709111134.08f57ac5@gandalf.local.home>
Date: Mon, 9 Jul 2018 11:11:34 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Claudio <claudio.fontana@...wa.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: ftrace performance (sched events): cyclictest shows 25% more latency

On Mon, 9 Jul 2018 16:53:52 +0200
Claudio <claudio.fontana@...wa.com> wrote:
>
> One additional data point, based on brute force again:
>
> I applied this change, in order to understand whether it was the
> trace_event_raw_event_* functions (I suppose primarily
> trace_event_raw_event_sched_switch) that contained the latency
> "offenders":
>
> diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
> index 4ecdfe2..969467d 100644
> --- a/include/trace/trace_events.h
> +++ b/include/trace/trace_events.h
> @@ -704,6 +704,8 @@ trace_event_raw_event_##call(void *__data, proto)
>  	struct trace_event_raw_##call *entry;			\
>  	int __data_size;					\
>  								\
> +	return;							\
> +								\
>  	if (trace_trigger_soft_disabled(trace_file))		\
>  		return;						\
>  								\
>
>
> This reduces the latency overhead to 6%, down from 25%.
>
> Maybe obvious? Wanted to share in case it helps, and will dig further.
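
For reference, this is roughly what the trace_event_raw_event_##call
macro expands to for a single event (sched_switch here; the sketch
below abbreviates the generated code, so take the exact signatures as
approximate rather than the verbatim kernel source). The injected
"return;" bails out before all of the recording work, so the remaining
~6% presumably comes from the tracepoint call path itself rather than
from recording the event:

	/* abbreviated sketch of the generated per-event function */
	static notrace void
	trace_event_raw_event_sched_switch(void *__data, bool preempt,
					   struct task_struct *prev,
					   struct task_struct *next)
	{
		struct trace_event_file *trace_file = __data;
		struct trace_event_data_offsets_sched_switch
					__maybe_unused __data_offsets;
		struct trace_event_buffer fbuffer;
		struct trace_event_raw_sched_switch *entry;
		int __data_size;

		return;		/* <-- the patch above cuts in here */

		/* everything below is skipped by the early return */
		if (trace_trigger_soft_disabled(trace_file))
			return;

		/* size dynamic arrays, reserve ring-buffer space */
		__data_size = trace_event_get_offsets_sched_switch(
					&__data_offsets, preempt, prev, next);
		entry = trace_event_buffer_reserve(&fbuffer, trace_file,
					sizeof(*entry) + __data_size);
		if (!entry)
			return;

		/* ... copy prev/next fields into *entry ... */

		trace_event_buffer_commit(&fbuffer);
	}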
I noticed that just disabling tracing with "echo 0 > tracing_on" gives
very similar numbers. I'm now recording timings of various parts of the
code, but the most I've seen is 12us, which should not account for that
overhead. So it's triggering something else.
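
To reproduce that comparison, something along these lines should work
(assuming tracefs is mounted at /sys/kernel/debug/tracing and
cyclictest from rt-tests; the exact cyclictest flags here are only
illustrative):

	cd /sys/kernel/debug/tracing
	echo 1 > events/sched/enable	# enable the sched trace events
	echo 1 > tracing_on
	cyclictest -m -p 80 -n -i 200 -l 100000	# latency with tracing on
	echo 0 > tracing_on	# stop buffer writes; probes stay attached
	cyclictest -m -p 80 -n -i 200 -l 100000	# overhead remains similar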
I'll be going on PTO next week, and there are things I must do this
week, so I may not have much more time to look into this until I get
back from PTO (July 23rd).
-- Steve