Message-Id: <20230818201115.8d191a891174b9657be2ff36@kernel.org>
Date: Fri, 18 Aug 2023 20:11:15 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Florent Revest <revest@...omium.org>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-trace-kernel@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
bpf <bpf@...r.kernel.org>, Sven Schnelle <svens@...ux.ibm.com>,
Alexei Starovoitov <ast@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Alan Maguire <alan.maguire@...cle.com>,
Mark Rutland <mark.rutland@....com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v3 5/8] tracing/fprobe: Enable fprobe events with
CONFIG_DYNAMIC_FTRACE_WITH_ARGS
On Thu, 17 Aug 2023 10:57:50 +0200
Florent Revest <revest@...omium.org> wrote:
> On Sat, Aug 12, 2023 at 7:37 AM Masami Hiramatsu (Google)
> <mhiramat@...nel.org> wrote:
> >
> > diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> > index d56304276318..6fb4ecf8767d 100644
> > --- a/kernel/trace/Kconfig
> > +++ b/kernel/trace/Kconfig
> > @@ -679,7 +679,6 @@ config FPROBE_EVENTS
> > select TRACING
> > select PROBE_EVENTS
> > select DYNAMIC_EVENTS
> > - depends on DYNAMIC_FTRACE_WITH_REGS
>
> I believe that, in practice, fprobe events still rely on WITH_REGS:
>
> > diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
> > index f440c97e050f..94c01dc061ec 100644
> > --- a/kernel/trace/trace_fprobe.c
> > +++ b/kernel/trace/trace_fprobe.c
> > @@ -327,14 +328,15 @@ static int fentry_dispatcher(struct fprobe *fp, unsigned long entry_ip,
> > struct pt_regs *regs = ftrace_get_regs(fregs);
>
> Because here you require that the entry handler gets ftrace_regs that are
> full pt_regs.
Ah, that is for perf events. Yes, that is the problematic point.
Since perf's interfaces depend on pt_regs (especially for stacktraces),
I cannot remove this part. This is the next issue to be solved.
Maybe we can use a partial pt_regs for stack tracing, so we could swap the
order of the patches to introduce ftrace_partial_regs() before this one and
use it for the perf event path.
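
For example, the perf branch in fentry_dispatcher() could become something
like this (just a rough sketch, assuming ftrace_partial_regs(fregs, &ptregs)
fills a caller-provided pt_regs from ftrace_regs and returns it):

#ifdef CONFIG_PERF_EVENTS
	if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE)) {
		struct pt_regs ptregs;
		struct pt_regs *regs;

		/*
		 * Build a partial pt_regs from ftrace_regs instead of
		 * requiring a full pt_regs via ftrace_get_regs().
		 * (An on-stack pt_regs is only for illustration; a
		 * per-CPU buffer may be preferable.)
		 */
		regs = ftrace_partial_regs(fregs, &ptregs);
		ret = fentry_perf_func(tf, entry_ip, fregs, regs);
	}
#endif

Then, if the perf stacktrace code can live with such a partial pt_regs, the
ftrace_get_regs() NULL check at the top of the dispatcher can go away.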
>
> > int ret = 0;
> >
> > + if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
> > + fentry_trace_func(tf, entry_ip, fregs);
> > +
> > +#ifdef CONFIG_PERF_EVENTS
> > if (!regs)
> > return 0;
> >
> > - if (trace_probe_test_flag(&tf->tp, TP_FLAG_TRACE))
> > - fentry_trace_func(tf, entry_ip, regs);
> > -#ifdef CONFIG_PERF_EVENTS
> > if (trace_probe_test_flag(&tf->tp, TP_FLAG_PROFILE))
> > - ret = fentry_perf_func(tf, entry_ip, regs);
> > + ret = fentry_perf_func(tf, entry_ip, fregs, regs);
> > #endif
> > return ret;
> > }
> > @@ -347,14 +349,15 @@ static void fexit_dispatcher(struct fprobe *fp, unsigned long entry_ip,
> > struct trace_fprobe *tf = container_of(fp, struct trace_fprobe, fp);
> > struct pt_regs *regs = ftrace_get_regs(fregs);
>
> And same here with the return handler
>
> I think fprobe events would need the same sort of refactoring as
> kprobe_multi bpf: using ftrace_partial_regs so they work on !WITH_REGS
> builds.
Actually, kprobe_multi uses fprobe directly, so this is not related to the
bpf part.
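
For reference, kprobe_multi (kernel/trace/bpf_trace.c) just embeds a
struct fprobe and registers its own handlers on it, roughly like this
(simplified, details omitted):

	struct bpf_kprobe_multi_link {
		struct bpf_link link;
		struct fprobe fp;
		...
	};

	/* at attach time */
	link->fp.entry_handler = kprobe_multi_link_handler;
	link->fp.exit_handler = kprobe_multi_link_exit_handler;
	err = register_fprobe_ips(&link->fp, addrs, cnt);

So whatever regs handling we change is on the fprobe/trace_fprobe side, and
the bpf side only sees what fprobe passes to its handlers.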
Thank you,
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>