Message-ID: <aRzs1GGLCm5svW5_@krava>
Date: Tue, 18 Nov 2025 23:01:56 +0100
From: Jiri Olsa <olsajiri@...il.com>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: ast@...nel.org, rostedt@...dmis.org, daniel@...earbox.net,
john.fastabend@...il.com, andrii@...nel.org, martin.lau@...ux.dev,
eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com,
mhiramat@...nel.org, mark.rutland@....com,
mathieu.desnoyers@...icios.com, jiang.biao@...ux.dev,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v3 2/6] x86/ftrace: implement
DYNAMIC_FTRACE_WITH_JMP
On Tue, Nov 18, 2025 at 08:36:30PM +0800, Menglong Dong wrote:
> Implement the DYNAMIC_FTRACE_WITH_JMP for x86_64. In ftrace_call_replace,
> we will use JMP32_INSN_OPCODE instead of CALL_INSN_OPCODE if the address
> should use "jmp".
>
> Meanwhile, adjust the direct call in ftrace_regs_caller. The RSB stays
> balanced in "jmp" mode. Take the function "foo" as an example:
>
> original_caller:
>   call foo -> foo:
>                 call fentry -> fentry:
>                                  [do ftrace callbacks]
>                                  move tramp_addr to stack
>                                  RET -> tramp_addr
>               tramp_addr:
>                 [..]
>                 call foo_body -> foo_body:
>                                    [..]
>                                    RET -> back to tramp_addr
>                 [..]
>                 RET -> back to original_caller
>
> Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/kernel/ftrace.c | 7 ++++++-
> arch/x86/kernel/ftrace_64.S | 12 +++++++++++-
> 3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index fa3b616af03a..462250a20311 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -230,6 +230,7 @@ config X86
> select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
> select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64
> select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> + select HAVE_DYNAMIC_FTRACE_WITH_JMP if X86_64
> select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
> select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
> select HAVE_EBPF_JIT
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 4450acec9390..0543b57f54ee 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -74,7 +74,12 @@ static const char *ftrace_call_replace(unsigned long ip, unsigned long addr)
> * No need to translate into a callthunk. The trampoline does
> * the depth accounting itself.
> */
> - return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> + if (ftrace_is_jmp(addr)) {
> + addr = ftrace_jmp_get(addr);
> + return text_gen_insn(JMP32_INSN_OPCODE, (void *)ip, (void *)addr);
> + } else {
> + return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> + }
> }
>
> static int ftrace_verify_code(unsigned long ip, const char *old_code)
> diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> index 823dbdd0eb41..a132608265f6 100644
> --- a/arch/x86/kernel/ftrace_64.S
> +++ b/arch/x86/kernel/ftrace_64.S
> @@ -285,8 +285,18 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
> ANNOTATE_NOENDBR
> RET
>
> +1:
> + testb $1, %al
> + jz 2f
> + andq $0xfffffffffffffffe, %rax
> + movq %rax, MCOUNT_REG_SIZE+8(%rsp)
> + restore_mcount_regs
> + /* Restore flags */
> + popfq
> + RET
Is this hunk the reason for the 0x1 jmp-bit you set in the address?

I wonder: if we introduced a new flag in dyn_ftrace::flags for this instead,
we'd then need an extra ftrace trampoline for the jmp ftrace_ops.

jirka