Message-ID: <aNKAIsHQZySyrV4o@krava>
Date: Tue, 23 Sep 2025 13:10:26 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: mhiramat@...nel.org, rostedt@...dmis.org,
	mathieu.desnoyers@...icios.com, linux-kernel@...r.kernel.org,
	linux-trace-kernel@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH 2/2] tracing: fprobe: optimization for entry only case

On Tue, Sep 23, 2025 at 05:20:01PM +0800, Menglong Dong wrote:
> For now, fgraph is used for fprobe even if we only need to trace the entry.
> However, the performance of ftrace is better than that of fgraph, and we can
> use ftrace_ops for this case.
> 
> The performance of kprobe-multi then increases from 54M/s to 69M/s. Before
> this commit:
> 
>   $ ./benchs/run_bench_trigger.sh kprobe-multi
>   kprobe-multi   :   54.663 ± 0.493M/s
> 
> After this commit:
> 
>   $ ./benchs/run_bench_trigger.sh kprobe-multi
>   kprobe-multi   :   69.447 ± 0.143M/s
> 
> Mitigations were disabled during the benchmark runs above.
> 
> Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
> ---
>  kernel/trace/fprobe.c | 88 +++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 81 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index 1785fba367c9..de4ae075548d 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -292,7 +292,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
>  				if (node->addr != func)
>  					continue;
>  				fp = READ_ONCE(node->fp);
> -				if (fp && !fprobe_disabled(fp))
> +				if (fp && !fprobe_disabled(fp) && fp->exit_handler)
>  					fp->nmissed++;
>  			}
>  			return 0;
> @@ -312,11 +312,11 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
>  		if (node->addr != func)
>  			continue;
>  		fp = READ_ONCE(node->fp);
> -		if (!fp || fprobe_disabled(fp))
> +		if (unlikely(!fp || fprobe_disabled(fp) || !fp->exit_handler))
>  			continue;
>  
>  		data_size = fp->entry_data_size;
> -		if (data_size && fp->exit_handler)
> +		if (data_size)
>  			data = fgraph_data + used + FPROBE_HEADER_SIZE_IN_LONG;
>  		else
>  			data = NULL;
> @@ -327,7 +327,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
>  			ret = __fprobe_handler(func, ret_ip, fp, fregs, data);
>  
>  		/* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */
> -		if (!ret && fp->exit_handler) {
> +		if (!ret) {
>  			int size_words = SIZE_IN_LONG(data_size);
>  
>  			if (write_fprobe_header(&fgraph_data[used], fp, size_words))
> @@ -384,6 +384,70 @@ static struct fgraph_ops fprobe_graph_ops = {
>  };
>  static int fprobe_graph_active;
>  
> +/* ftrace_ops backend (entry-only) */
> +static void fprobe_ftrace_entry(unsigned long ip, unsigned long parent_ip,
> +	struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +{
> +	struct fprobe_hlist_node *node;
> +	struct rhlist_head *head, *pos;
> +	struct fprobe *fp;
> +
> +	guard(rcu)();
> +	head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);

hi,
so this is based on your previous patch, right?
  fprobe: use rhltable for fprobe_ip_table

would be better to mention that.. is there a latest version of that somewhere?

thanks,
jirka
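
[Editor's note: a minimal sketch for context, not part of the patch or of the
reply above. It shows the bare ftrace_ops registration pattern that an
entry-only probe can rely on instead of fgraph. The callback prototype matches
fprobe_ftrace_entry() in the quoted diff; the ops name, flag choice and filter
setup below are illustrative assumptions, not the patch's actual registration
code.]

#include <linux/ftrace.h>

/* Entry-only callback: same prototype as fprobe_ftrace_entry() above. */
static void entry_only_callback(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	/* entry-only work goes here; no return (exit) hook is installed */
}

static struct ftrace_ops entry_only_ops = {
	.func	= entry_only_callback,
	/* assumption: have ftrace save the argument registers for the callback */
	.flags	= FTRACE_OPS_FL_SAVE_ARGS,
};

/* Attach the callback to a single function entry address. */
static int attach_entry_only(unsigned long ip)
{
	int ret;

	ret = ftrace_set_filter_ip(&entry_only_ops, ip, 0, 0);
	if (ret)
		return ret;

	return register_ftrace_function(&entry_only_ops);
}

Skipping the fgraph return-hook machinery for entry-only probes is presumably
where the throughput gain in the quoted numbers comes from.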
