Message-Id: <20251118141934.ddf14aabf371d0939415b588@kernel.org>
Date: Tue, 18 Nov 2025 14:19:34 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: ast@...nel.org, rostedt@...dmis.org, daniel@...earbox.net,
 john.fastabend@...il.com, andrii@...nel.org, martin.lau@...ux.dev,
 eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
 kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org,
 mhiramat@...nel.org, mark.rutland@....com, mathieu.desnoyers@...icios.com,
 jiang.biao@...ux.dev, bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 1/6] ftrace: introduce FTRACE_OPS_FL_JMP

On Mon, 17 Nov 2025 11:49:01 +0800
Menglong Dong <menglong8.dong@...il.com> wrote:

> For now, the "nop" is replaced with a "call" instruction when a
> function is hooked by ftrace. However, sometimes the "call" can break
> the RSB (return stack buffer) and introduce extra overhead. Therefore,
> introduce the flag FTRACE_OPS_FL_JMP, which indicates that the
> ftrace_ops should be invoked with a "jmp" instead of a "call". For now,
> it is only used for the direct call case.
> 
> When a direct ftrace_ops is marked with FTRACE_OPS_FL_JMP, the lowest bit
> of ops->direct_call is set to 1. ftrace_call_replace() can therefore tell
> whether the callback should be reached with a "jmp".
> 
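
For reference, a minimal user-space sketch of how the low-bit tag is meant
to be produced and consumed (purely illustrative, not part of the patch;
the helpers mirror the ones introduced below and the trampoline address is
made up):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors of the helpers this patch adds: bit 0 of a trampoline address
 * is free because function entries are at least 2-byte aligned. */
static bool ftrace_is_jmp(unsigned long addr) { return addr & 1; }
static unsigned long ftrace_jmp_set(unsigned long addr) { return addr | 1UL; }
static unsigned long ftrace_jmp_get(unsigned long addr) { return addr & ~1UL; }

int main(void)
{
	unsigned long trampoline = 0x1000;	/* hypothetical, aligned */
	/* what register_ftrace_direct() does when FTRACE_OPS_FL_JMP is set */
	unsigned long direct = ftrace_jmp_set(trampoline);

	/* arch code (e.g. ftrace_call_replace()) can then pick the
	 * instruction from the tag and recover the real target */
	printf("emit a %s to %#lx\n",
	       ftrace_is_jmp(direct) ? "jmp" : "call",
	       ftrace_jmp_get(direct));
	return 0;
}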

nit: Are we sure the lowest bit is always 0?
At least register_ftrace_direct() needs to reject an @addr
whose lowest bit is already set.
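
One way to do that, as a purely illustrative sketch (the check is not in
the posted patch; the error code and exact placement are assumptions),
would be an early sanity check in register_ftrace_direct():

	/* Bit 0 of @addr is reused as the FTRACE_OPS_FL_JMP tag, so an
	 * odd trampoline address cannot be represented and must be
	 * rejected before ftrace_jmp_set() is applied. */
	if (addr & 1)
		return -EINVAL;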

Thanks,


> Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
> ---
>  include/linux/ftrace.h | 33 +++++++++++++++++++++++++++++++++
>  kernel/trace/Kconfig   | 12 ++++++++++++
>  kernel/trace/ftrace.c  |  9 ++++++++-
>  3 files changed, 53 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 07f8c309e432..015dd1049bea 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -359,6 +359,7 @@ enum {
>  	FTRACE_OPS_FL_DIRECT			= BIT(17),
>  	FTRACE_OPS_FL_SUBOP			= BIT(18),
>  	FTRACE_OPS_FL_GRAPH			= BIT(19),
> +	FTRACE_OPS_FL_JMP			= BIT(20),
>  };
>  
>  #ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
> @@ -577,6 +578,38 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
>  						 unsigned long addr) { }
>  #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
>  
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
> +static inline bool ftrace_is_jmp(unsigned long addr)
> +{
> +	return addr & 1;
> +}
> +
> +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> +{
> +	return addr | 1UL;
> +}
> +
> +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> +{
> +	return addr & ~1UL;
> +}
> +#else
> +static inline bool ftrace_is_jmp(unsigned long addr)
> +{
> +	return false;
> +}
> +
> +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> +{
> +	return addr;
> +}
> +
> +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> +{
> +	return addr;
> +}
> +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_JMP */
> +
>  #ifdef CONFIG_STACK_TRACER
>  
>  int stack_trace_sysctl(const struct ctl_table *table, int write, void *buffer,
> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> index d2c79da81e4f..4661b9e606e0 100644
> --- a/kernel/trace/Kconfig
> +++ b/kernel/trace/Kconfig
> @@ -80,6 +80,12 @@ config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
>  	  If the architecture generates __patchable_function_entries sections
>  	  but does not want them included in the ftrace locations.
>  
> +config HAVE_DYNAMIC_FTRACE_WITH_JMP
> +	bool
> +	help
> +	  If the architecture supports replacing the __fentry__ call with a
> +	  "jmp" instruction.
> +
>  config HAVE_SYSCALL_TRACEPOINTS
>  	bool
>  	help
> @@ -330,6 +336,12 @@ config DYNAMIC_FTRACE_WITH_ARGS
>  	depends on DYNAMIC_FTRACE
>  	depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS
>  
> +config DYNAMIC_FTRACE_WITH_JMP
> +	def_bool y
> +	depends on DYNAMIC_FTRACE
> +	depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> +	depends on HAVE_DYNAMIC_FTRACE_WITH_JMP
> +
>  config FPROBE
>  	bool "Kernel Function Probe (fprobe)"
>  	depends on HAVE_FUNCTION_GRAPH_FREGS && HAVE_FTRACE_GRAPH_FUNC
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 59cfacb8a5bb..a6c060a4f50b 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5951,7 +5951,8 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
>  	for (i = 0; i < size; i++) {
>  		hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
>  			del = __ftrace_lookup_ip(direct_functions, entry->ip);
> -			if (del && del->direct == addr) {
> +			if (del && ftrace_jmp_get(del->direct) ==
> +				   ftrace_jmp_get(addr)) {
>  				remove_hash_entry(direct_functions, del);
>  				kfree(del);
>  			}
> @@ -6018,6 +6019,9 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>  
>  	mutex_lock(&direct_mutex);
>  
> +	if (ops->flags & FTRACE_OPS_FL_JMP)
> +		addr = ftrace_jmp_set(addr);
> +
>  	/* Make sure requested entries are not already registered.. */
>  	size = 1 << hash->size_bits;
>  	for (i = 0; i < size; i++) {
> @@ -6138,6 +6142,9 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>  
>  	lockdep_assert_held_once(&direct_mutex);
>  
> +	if (ops->flags & FTRACE_OPS_FL_JMP)
> +		addr = ftrace_jmp_set(addr);
> +
>  	/* Enable the tmp_ops to have the same functions as the direct ops */
>  	ftrace_ops_init(&tmp_ops);
>  	tmp_ops.func_hash = ops->func_hash;
> -- 
> 2.51.2
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
