Message-ID: <20180724102316.41cdb8a1@gandalf.local.home>
Date: Tue, 24 Jul 2018 10:23:16 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Claudio <claudio.fontana@...wa.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: ftrace global trace_pipe_raw
On Tue, 24 Jul 2018 11:58:18 +0200
Claudio <claudio.fontana@...wa.com> wrote:
> Hello Steven,
>
> I am correlating Linux sched events, following all tasks across CPUs,
> and one thing that would be really convenient would be to have a global
> trace_pipe_raw, in addition to the per-CPU ones, with already sorted events.
>
> I would imagine the core functionality is already available, since trace_pipe
> in the tracing directory already shows all events regardless of CPU, and so
> it would be a matter of doing the same for trace_pipe_raw.
The difference between trace_pipe and trace_pipe_raw is that trace_pipe
is post-processed: it reads the per-CPU buffers and interleaves them
one event at a time. trace_pipe_raw just sends you the raw,
unprocessed data directly from the buffers, which are grouped per CPU.
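So a user space reader has to open each per-CPU file itself and pull
whole ring-buffer pages out of it. Roughly something like this (just a
sketch, assuming tracefs is mounted at /sys/kernel/tracing and four
CPUs; decoding the binary pages and merging them by timestamp, e.g.
with libtraceevent, is left to the reader):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char path[256];
	char page[4096];	/* one ring-buffer page per read */
	ssize_t r;
	int cpu, fd;

	for (cpu = 0; cpu < 4; cpu++) {	/* 4 CPUs assumed for brevity */
		snprintf(path, sizeof(path),
			 "/sys/kernel/tracing/per_cpu/cpu%d/trace_pipe_raw", cpu);
		fd = open(path, O_RDONLY | O_NONBLOCK);
		if (fd < 0)
			continue;
		r = read(fd, page, sizeof(page));
		if (r > 0)
			printf("cpu%d: %zd bytes of raw event data\n", cpu, r);
		close(fd);
	}
	return 0;
}

Merging those per-CPU streams into one globally sorted stream is
exactly the post processing that trace_pipe does for you.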
>
> But is there a good reason why trace_pipe_raw is available only per-cpu?
Yes, because it maps the ring buffers themselves without any post
processing.
>
> Would work in the direction of adding a global trace_pipe_raw be considered
> for inclusion?
The design of the lockless ring buffer requires that the writer not be
preempted, and that the data not be written from more than one
location. To achieve that, we make a per-CPU buffer and disable
preemption when writing. This means that we have only one writer at a
time per buffer. It can handle interrupts and NMIs, because they finish
before returning to the interrupted context, so they don't break the
algorithm. But having writers from multiple CPUs would require locking
or other heavy synchronization operations that would greatly reduce the
speed of writing to the buffers (not to mention the cache thrashing).
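To illustrate the per-CPU part of that pattern (this is only a sketch
with made-up names like my_buf/write_event, not the actual ring buffer
code, which reserves and commits space with local atomic operations):

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/string.h>

struct cpu_buf {
	unsigned long	head;
	char		data[4096];
};

static DEFINE_PER_CPU(struct cpu_buf, my_buf);

static void write_event(const void *ev, unsigned long len)
{
	struct cpu_buf *buf;

	preempt_disable();		/* stay on this CPU: single writer */
	buf = this_cpu_ptr(&my_buf);
	if (buf->head + len <= sizeof(buf->data)) {
		memcpy(buf->data + buf->head, ev, len);
		buf->head += len;
	}
	preempt_enable();
}

Note the sketch only shows the per-CPU/preemption half of the story;
a nested interrupt or NMI writer on the same CPU is only safe because
the real ring buffer does a lockless reserve/commit of the write
location, which a plain head update like the above does not handle.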
-- Steve