Message-ID: <20191210183519.41772e0f@gandalf.local.home>
Date: Tue, 10 Dec 2019 18:35:19 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Alexei Starovoitov <ast@...nel.org>
Cc: <davem@...emloft.net>, <daniel@...earbox.net>, <x86@...nel.org>,
<netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
<kernel-team@...com>
Subject: Re: [PATCH bpf 1/3] ftrace: Fix function_graph tracer interaction
with BPF trampoline
On Sun, 8 Dec 2019 16:01:12 -0800
Alexei Starovoitov <ast@...nel.org> wrote:
> #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index 67e0c462b059..a2659735db73 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -101,6 +101,15 @@ int function_graph_enter(unsigned long ret, unsigned long func,
> {
> struct ftrace_graph_ent trace;
>
> + /*
> + * Skip graph tracing if the return location is served by a direct
> + * trampoline, since the call sequence and return addresses are no
> + * longer predictable. Ex: a BPF trampoline may call the original
> + * function and may skip frames depending on the type of BPF
> + * programs attached.
> + */
> + if (ftrace_direct_func_count &&
> + ftrace_find_rec_direct(ret - MCOUNT_INSN_SIZE))
My only worry is that this may not work for all archs that implement
it. But I figure we can cross that bridge when we get to it.
> + return -EBUSY;
> trace.func = func;
> trace.depth = ++current->curr_ret_depth;
>
I added this patch to my queue, and it's about 70% of the way through my
test suite (which takes around 10 to 13 hours).
As I'm about to send a pull request to Linus tomorrow, I could include
this patch (as it will be fully tested), and then you could apply the
other two when it hits Linus's tree.
Would that work for you?
-- Steve