Message-ID: <aNLKEJc6Bh-dC3ab@krava>
Date: Tue, 23 Sep 2025 18:25:52 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Menglong Dong <menglong.dong@...ux.dev>
Cc: Jiri Olsa <olsajiri@...il.com>, mhiramat@...nel.org,
rostedt@...dmis.org, mathieu.desnoyers@...icios.com,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH 2/2] tracing: fprobe: optimization for entry only case
On Tue, Sep 23, 2025 at 09:34:20PM +0800, Menglong Dong wrote:
> On 2025/9/23 20:25, Jiri Olsa wrote:
> > On Tue, Sep 23, 2025 at 07:16:55PM +0800, menglong.dong@...ux.dev wrote:
> > > On 2025/9/23 19:10 Jiri Olsa <olsajiri@...il.com> write:
> > > > On Tue, Sep 23, 2025 at 05:20:01PM +0800, Menglong Dong wrote:
> > > > > For now, fgraph is used for fprobe even if we only need to trace the
> > > > > entry. However, the performance of ftrace is better than fgraph, so we
> > > > > can use ftrace_ops for this case.
> > > > >
> > > > > The performance of kprobe-multi then increases from 54M/s to 69M/s. Before
> > > > > this commit:
> > > > >
> > > > > $ ./benchs/run_bench_trigger.sh kprobe-multi
> > > > > kprobe-multi : 54.663 ± 0.493M/s
> > > > >
> > > > > After this commit:
> > > > >
> > > > > $ ./benchs/run_bench_trigger.sh kprobe-multi
> > > > > kprobe-multi : 69.447 ± 0.143M/s
> > > > >
> > > > > Mitigations were disabled during the benchmark runs above.
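
As a side note, a minimal sketch of what an entry-only attachment over plain
ftrace_ops looks like, to show why it avoids the return-hooking cost of the
fgraph backend. This is illustrative only: the demo_* names are made up and
not from this patch, and extra ops flags may be needed depending on config.

#include <linux/ftrace.h>

/* entry-only callback: invoked at function entry; no shadow-stack /
 * return-hook bookkeeping is needed, which is where fgraph costs more */
static void demo_entry_cb(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	/* e.g. dispatch to an entry handler here */
}

static struct ftrace_ops demo_ops = {
	.func	= demo_entry_cb,
	/* additional flags (e.g. FTRACE_OPS_FL_SAVE_REGS) may be needed
	 * depending on what the handler reads from fregs */
};

static int demo_attach(unsigned long addr)
{
	int ret;

	/* restrict the ops to one function address, then enable it */
	ret = ftrace_set_filter_ip(&demo_ops, addr, 0, 0);
	if (ret)
		return ret;
	return register_ftrace_function(&demo_ops);
}
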
> > > > >
> > > > > Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
> > > > > ---
> > > > > kernel/trace/fprobe.c | 88 +++++++++++++++++++++++++++++++++++++++----
> > > > > 1 file changed, 81 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > > > > index 1785fba367c9..de4ae075548d 100644
> > > > > --- a/kernel/trace/fprobe.c
> > > > > +++ b/kernel/trace/fprobe.c
> > > > > @@ -292,7 +292,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> > > > > if (node->addr != func)
> > > > > continue;
> > > > > fp = READ_ONCE(node->fp);
> > > > > - if (fp && !fprobe_disabled(fp))
> > > > > + if (fp && !fprobe_disabled(fp) && fp->exit_handler)
> > > > > fp->nmissed++;
> > > > > }
> > > > > return 0;
> > > > > @@ -312,11 +312,11 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> > > > > if (node->addr != func)
> > > > > continue;
> > > > > fp = READ_ONCE(node->fp);
> > > > > - if (!fp || fprobe_disabled(fp))
> > > > > + if (unlikely(!fp || fprobe_disabled(fp) || !fp->exit_handler))
> > > > > continue;
> > > > >
> > > > > data_size = fp->entry_data_size;
> > > > > - if (data_size && fp->exit_handler)
> > > > > + if (data_size)
> > > > > data = fgraph_data + used + FPROBE_HEADER_SIZE_IN_LONG;
> > > > > else
> > > > > data = NULL;
> > > > > @@ -327,7 +327,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> > > > > ret = __fprobe_handler(func, ret_ip, fp, fregs, data);
> > > > >
> > > > > /* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */
> > > > > - if (!ret && fp->exit_handler) {
> > > > > + if (!ret) {
> > > > > int size_words = SIZE_IN_LONG(data_size);
> > > > >
> > > > > if (write_fprobe_header(&fgraph_data[used], fp, size_words))
> > > > > @@ -384,6 +384,70 @@ static struct fgraph_ops fprobe_graph_ops = {
> > > > > };
> > > > > static int fprobe_graph_active;
> > > > >
> > > > > +/* ftrace_ops backend (entry-only) */
> > > > > +static void fprobe_ftrace_entry(unsigned long ip, unsigned long parent_ip,
> > > > > + struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > > > > +{
> > > > > + struct fprobe_hlist_node *node;
> > > > > + struct rhlist_head *head, *pos;
> > > > > + struct fprobe *fp;
> > > > > +
> > > > > + guard(rcu)();
> > > > > + head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);
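
The quoted hunk is cut off at this point. For context, a lookup loop over an
rhltable under RCU typically looks roughly like the sketch below; this is
illustrative only, it simplifies the entry-data handling, and it assumes the
rhlist member in struct fprobe_hlist_node is named 'hlist', which is not
shown in the quoted diff.

	struct fprobe_hlist_node *node;
	struct rhlist_head *head, *pos;
	struct fprobe *fp;

	guard(rcu)();
	head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);
	/* several fprobes may hash to the same ip, so walk the whole list;
	 * the 'hlist' member name is an assumption for illustration */
	rhl_for_each_entry_rcu(node, pos, head, hlist) {
		fp = READ_ONCE(node->fp);
		if (unlikely(!fp || fprobe_disabled(fp)))
			continue;
		/* entry-only path: call the entry handler, no exit
		 * bookkeeping (entry_data allocation omitted here) */
		__fprobe_handler(ip, parent_ip, fp, fregs, NULL);
	}
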
> > > >
> > > > hi,
> > > > so this is based on your previous patch, right?
> > > > fprobe: use rhltable for fprobe_ip_table
> > > >
> > > > would be better to mention that.. is there a latest version of that somewhere?
> > >
> > > Yeah, this is based on that version. That patch is applied
> > > to: https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git/log/?h=probes%2Ffor-next
> > >
> > > And I did the testing on that branch.
> >
> > did you run 'test_progs -t kprobe_multi'? it silently crashes the
> > kernel for me.. attaching config
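
For reference, the selftest in question lives in the kernel tree and is
usually run roughly as:

  $ cd tools/testing/selftests/bpf
  $ make
  $ sudo ./test_progs -t kprobe_multi
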
>
> Hi. I have run the whole test_progs suite and it passed.
>
> In fact, your config will panic even without this patch.
> Please don't enable CONFIG_X86_KERNEL_IBT; the recursion
> in is_endbr() still exists until this series is applied:
>
> tracing: fprobe: Protect return handler from recursion loop
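
For testing without that series, IBT can be switched off in the config, e.g.:

  $ ./scripts/config --file .config -d X86_KERNEL_IBT
  $ make olddefconfig

which leaves "# CONFIG_X86_KERNEL_IBT is not set" in .config.
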
ugh, I thought it was already there, thanks
jirka