Message-ID: <20181021093106.mhjpb2rnx6kstjki@ryuk>
Date: Sun, 21 Oct 2018 20:31:06 +1100
From: Aleksa Sarai <cyphar@...har.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Robert Richter <rric@...nel.org>,
Brendan Gregg <bgregg@...flix.com>
Cc: linux-kernel@...r.kernel.org, oprofile-list@...ts.sf.net,
cyphar@...har.com
Subject: [RFC] Merging ftrace_stack, perf_callchain, oprofile->backtrace and
stack_trace

Hi all,

I'm currently working on a patchset to make kretprobes produce
reasonable stack traces[1], and it appears this is a generic problem
across the entire kernel -- the same kretprobe_trampoline() issue shows
up with ftrace just as much as with bpf_trace.

However, while working on this patch, I've noticed that there appear
to be several different implementations of "get the stack trace from
this pt_regs", all of which look quite similar. Namely:
* struct ftrace_stack;
* struct perf_callchain_entry; [**]
* struct stack_trace;
* oprofile_operations->backtrace [This one is only tangentially
  related to the kretprobe problem, but since its usage is not very
  complicated -- logging to dmesg -- it wouldn't be too bad to
  refactor it too].
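
To illustrate what I mean by "quite similar", here are the rough shapes
of these (paraphrased from a recent tree, so the exact details may be
slightly off):

  struct ftrace_stack {                 /* kernel/trace/trace.c */
          unsigned long   calls[FTRACE_STACK_MAX_ENTRIES];
  };

  struct perf_callchain_entry {         /* include/uapi/linux/perf_event.h */
          __u64   nr;
          __u64   ip[];                 /* perf_event_max_stack */
  };

  struct stack_trace {                  /* include/linux/stacktrace.h */
          unsigned int    nr_entries, max_entries;
          unsigned long   *entries;
          int             skip;         /* input: entries to skip */
  };

  /* include/linux/oprofile.h */
  void (*backtrace)(struct pt_regs * const regs, unsigned int depth);

In all four cases the job boils down to "walk the frames starting from
pt_regs and record a list of return addresses" -- only the container
differs.
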
Would there be a strong objection to me trying to merge these together
so that they all use 'struct stack_trace', and no longer carry
arch-specific copies of (what appears to be) the same unwind logic? Or
is this something that was intentionally avoided because there are some
differences that I'm not seeing?
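
Concretely, I'm imagining that everything would eventually funnel
through the existing stack_trace entry points (quoting the
include/linux/stacktrace.h prototypes from memory, so double-check me):

  void save_stack_trace(struct stack_trace *trace);
  void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace);
  void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace);
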
The reason I ask is that the kretprobes patch would require saving the
stacktrace during pre_handler_kretprobe() -- and so, in order for all of
the tracing subsystems to take advantage of it, they'd need to be able
to use that saved stack trace.
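
Very roughly, the kretprobe side would look something like the sketch
below. Note that the 'trace' and 'entries' members are fields I'd be
adding to struct kretprobe_instance (they don't exist today), and
KRETPROBE_TRACE_ENTRIES is a made-up constant:

  /* sketch only, not the actual patch -- called from
   * pre_handler_kretprobe() once the kretprobe_instance has been
   * grabbed from the free list */
  static void kretprobe_save_stack(struct kretprobe_instance *ri,
                                   struct pt_regs *regs)
  {
          ri->trace.entries     = ri->entries;              /* new field */
          ri->trace.max_entries = KRETPROBE_TRACE_ENTRIES;  /* made up */
          ri->trace.nr_entries  = 0;
          ri->trace.skip        = 0;
          save_stack_trace_regs(regs, &ri->trace);
  }
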
The only other option I can see would be to implement some sort of
translation from 'struct stack_trace' to the others. This wouldn't be
too bad, but I imagine it would be uglier than refactoring them all to
use the same struct.
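
For the perf case, the translation layer would presumably be a small
helper along these lines -- stack_trace_to_callchain() is a made-up
name, but perf_callchain_store() is the existing helper that the arch
unwinders already call:

  static void stack_trace_to_callchain(struct stack_trace *trace,
                                       struct perf_callchain_entry_ctx *ctx)
  {
          unsigned int i;

          /* copy the saved return addresses into the perf callchain */
          for (i = 0; i < trace->nr_entries; i++)
                  perf_callchain_store(ctx, trace->entries[i]);
  }

Something equivalent would be needed for ftrace_stack and for
oprofile's backtrace hook.
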
[**] perf_callchain_entry has the concept of "marking" a context in the
     stack trace. I wonder whether this is something we could also do
     with 'struct stack_trace' -- after all, the markers are just magic
     ->ip values. *But* then the question is what the purpose of
     sysctl_perf_event_max_contexts_per_stack is. It limits the number
     of contexts, but isn't that already implicitly limited by the
     number of stack entries? We could implement the same thing with
     'struct stack_trace', but it would require wrapping it to make it
     efficient.
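
(For reference, the context markers are just sentinel values stored
inline in the ->ip stream -- quoting include/uapi/linux/perf_event.h
from memory, so double-check me:)

  enum perf_callchain_context {
          PERF_CONTEXT_HV                 = (__u64)-32,
          PERF_CONTEXT_KERNEL             = (__u64)-128,
          PERF_CONTEXT_USER               = (__u64)-512,

          PERF_CONTEXT_GUEST              = (__u64)-2048,
          PERF_CONTEXT_GUEST_KERNEL       = (__u64)-2176,
          PERF_CONTEXT_GUEST_USER         = (__u64)-2560,

          PERF_CONTEXT_MAX                = (__u64)-4095,
  };
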
[1]: https://github.com/iovisor/bpftrace/issues/101
--
Aleksa Sarai
Senior Software Engineer (Containers)
SUSE Linux GmbH
<https://www.cyphar.com/>