Message-ID: <20240530223057.21c2a779@rorschach.local.home>
Date: Thu, 30 May 2024 22:30:57 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org
Cc: Masami Hiramatsu <mhiramat@...nel.org>, Mark Rutland
<mark.rutland@....com>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Andrew Morton <akpm@...ux-foundation.org>, Alexei Starovoitov
<alexei.starovoitov@...il.com>, Florent Revest <revest@...omium.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, bpf <bpf@...r.kernel.org>, Sven
Schnelle <svens@...ux.ibm.com>, Alexei Starovoitov <ast@...nel.org>, Jiri
Olsa <jolsa@...nel.org>, Arnaldo Carvalho de Melo <acme@...nel.org>, Daniel
Borkmann <daniel@...earbox.net>, Alan Maguire <alan.maguire@...cle.com>,
Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner
<tglx@...utronix.de>, Guo Ren <guoren@...nel.org>
Subject: Re: [PATCH 10/20] function_graph: Have the instances use their own
ftrace_ops for filtering
On Fri, 24 May 2024 22:37:02 -0400
Steven Rostedt <rostedt@...dmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>
>
> Allow instances to have their own ftrace_ops as part of the fgraph_ops
> so that the function_graph tracer filters on the set_ftrace_filter file
> of the instance and not on the top instance.
>
> Note that this also requires updating ftrace_graph_func() to call the new
> function_graph_enter_ops() instead of function_graph_enter() so that
> it avoids pushing onto the shadow stack multiple times for the same function.
So I found a major design flaw in this patch.
>
> Co-developed with Masami Hiramatsu:
> Link: https://lore.kernel.org/linux-trace-kernel/171509102088.162236.15758883237657317789.stgit@devnote2
>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
> Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> ---
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8da0e66ca22d..998558cb8f15 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -648,9 +648,24 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *op, struct ftrace_regs *fregs)
> {
> struct pt_regs *regs = &fregs->regs;
> - unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs);
> + unsigned long *parent = (unsigned long *)kernel_stack_pointer(regs);
> + struct fgraph_ops *gops = container_of(op, struct fgraph_ops, ops);
> + int bit;
> +
> + if (unlikely(ftrace_graph_is_dead()))
> + return;
> +
> +	if (unlikely(atomic_read(&current->tracing_graph_pause)))
> + return;
>
> - prepare_ftrace_return(ip, (unsigned long *)stack, 0);
> + bit = ftrace_test_recursion_trylock(ip, *parent);
> + if (bit < 0)
> + return;
> +
> + if (!function_graph_enter_ops(*parent, ip, 0, parent, gops))
So each registered graph ops has its own ftrace_ops that gets registered
with ftrace, which means this function gets called in a loop (by the
ftrace iterator function). That requires code to detect
function_graph_enter_ops() being called multiple times for the same
function, and it means each fgraph_ops gets its own retstack entry on
the shadow stack.
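
To make the duplication concrete, here's a rough user-space model (the
names, numbers, and helpers below are made up for illustration, none of
it is kernel code): with one ftrace_ops per fgraph_ops, the ftrace list
iterator invokes every matching callback for the same function entry,
and each callback pushes its own return entry on the shared shadow
stack:

/* Toy user-space model of the problem (not kernel code): every
 * fgraph_ops has its own ftrace_ops, so the ftrace list iterator
 * calls each callback for the same function entry, and each one
 * pushes a return entry on the shared shadow stack. */
#include <stdio.h>

#define NR_OPS		3	/* pretend three fgraph instances trace this ip */
#define STACK_DEPTH	16

static unsigned long shadow_stack[STACK_DEPTH];
static int shadow_top;

/* stand-in for function_graph_enter_ops() pushing a ret_stack entry */
static void push_retstack(unsigned long ip, int ops_id)
{
	shadow_stack[shadow_top++] = ip;
	printf("ops %d pushed entry for ip %#lx (depth now %d)\n",
	       ops_id, ip, shadow_top);
}

int main(void)
{
	unsigned long ip = 0xffffffff81000000UL;	/* fake traced function */
	int i;

	/* the iterator loop: one callback invocation per registered ops */
	for (i = 0; i < NR_OPS; i++)
		push_retstack(ip, i);

	printf("one function entry consumed %d shadow stack slots\n",
	       shadow_top);
	return 0;
}

Without detecting the repeated calls, every instance tracing the same
function burns its own slot on every entry.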
I find this a waste of shadow stack resources, and it also complicates
the code by having to deal with tail calls and all that.
BUT! There's good news! I also thought about another way of handling
this. I have something working, but it requires a bit of rewriting of
the code. I should have something out in a day or two.
-- Steve
> + *parent = (unsigned long)&return_to_handler;
> +
> + ftrace_test_recursion_unlock(bit);
> }
> #endif
>