Message-ID: <CAEf4BzbKKmNnqQP0g8OVSgwqb2DTidBpKBjyi-QQJBRJ+-6SWg@mail.gmail.com>
Date: Tue, 13 Jan 2026 17:22:55 -0800
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: ast@...nel.org, andrii@...nel.org, daniel@...earbox.net,
martin.lau@...ux.dev, eddyz87@...il.com, song@...nel.org,
yonghong.song@...ux.dev, john.fastabend@...il.com, kpsingh@...nel.org,
sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org, davem@...emloft.net,
dsahern@...nel.org, tglx@...utronix.de, mingo@...hat.com,
jiang.biao@...ux.dev, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, bpf@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v9 06/11] bpf,x86: introduce emit_store_stack_imm64() for trampoline

On Sat, Jan 10, 2026 at 6:12 AM Menglong Dong <menglong8.dong@...il.com> wrote:
>
> Introduce the helper emit_store_stack_imm64(), which is used to store an
> imm64 to the stack with the help of r0.
>
> Signed-off-by: Menglong Dong <dongml2@...natelecom.cn>
> ---
> v9:
> - rename emit_st_r0_imm64() to emit_store_stack_imm64()
> ---
> arch/x86/net/bpf_jit_comp.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index e3b1c4b1d550..d94f7038c441 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1300,6 +1300,15 @@ static void emit_st_r12(u8 **pprog, u32 size, u32 dst_reg, int off, int imm)
> emit_st_index(pprog, size, dst_reg, X86_REG_R12, off, imm);
> }
>
> +static void emit_store_stack_imm64(u8 **pprog, int stack_off, u64 imm64)
> +{
> + /* mov rax, imm64
> + * mov QWORD PTR [rbp - stack_off], rax
> + */
> + emit_mov_imm64(pprog, BPF_REG_0, imm64 >> 32, (u32) imm64);
Maybe make the caller pass BPF_REG_0 explicitly? It would be more generic,
and also more explicit that BPF_REG_0 is used as a temporary register (see
the sketch right after the quoted function below).
> + emit_stx(pprog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_off);
why are you negating the stack offset here and not in the caller?..
> +}
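
Something along these lines is what I have in mind (untested sketch, just to
illustrate the shape; the tmp_reg parameter name is made up):

static void emit_store_stack_imm64(u8 **pprog, u32 tmp_reg, int off, u64 imm64)
{
        /* mov <tmp_reg>, imm64
         * mov QWORD PTR [rbp + off], <tmp_reg>
         */
        emit_mov_imm64(pprog, tmp_reg, imm64 >> 32, (u32) imm64);
        emit_stx(pprog, BPF_DW, BPF_REG_FP, tmp_reg, off);
}
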
> +
> static int emit_atomic_rmw(u8 **pprog, u32 atomic_op,
> u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
> {
> @@ -3352,16 +3361,14 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> * mov rax, nr_regs
> * mov QWORD PTR [rbp - nregs_off], rax
> */
> - emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_regs);
> - emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -nregs_off);
> + emit_store_stack_imm64(&prog, nregs_off, nr_regs);
>
> if (flags & BPF_TRAMP_F_IP_ARG) {
> /* Store IP address of the traced function:
> * movabsq rax, func_addr
> * mov QWORD PTR [rbp - ip_off], rax
> */
> - emit_mov_imm64(&prog, BPF_REG_0, (long) func_addr >> 32, (u32) (long) func_addr);
> - emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
> + emit_store_stack_imm64(&prog, ip_off, (long)func_addr);
See above: I'd pass BPF_REG_0 and -ip_off (and -nregs_off) explicitly;
too many small transformations are hidden inside
emit_store_stack_imm64(), IMO. See the rough call-site sketch after the
quoted hunk below.
> }
>
> save_args(m, &prog, regs_off, false, flags);
> --
> 2.52.0
>
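
FWIW, with explicit arguments the call sites would then read something like
this (again, just a rough sketch, not tested):

        emit_store_stack_imm64(&prog, BPF_REG_0, -nregs_off, nr_regs);
        ...
        if (flags & BPF_TRAMP_F_IP_ARG) {
                ...
                emit_store_stack_imm64(&prog, BPF_REG_0, -ip_off, (long) func_addr);
        }

which keeps both the temporary register and the sign of the offset visible
at the point of use.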