Message-Id: <20251217061435.802204-4-duanchenghao@kylinos.cn>
Date: Wed, 17 Dec 2025 14:14:31 +0800
From: Chenghao Duan <duanchenghao@...inos.cn>
To: yangtiezhu@...ngson.cn,
rostedt@...dmis.org,
mhiramat@...nel.org,
mark.rutland@....com,
hengqi.chen@...il.com,
chenhuacai@...nel.org
Cc: kernel@...0n.name,
zhangtianyang@...ngson.cn,
masahiroy@...nel.org,
linux-kernel@...r.kernel.org,
loongarch@...ts.linux.dev,
bpf@...r.kernel.org,
duanchenghao@...inos.cn,
youling.tang@...ux.dev,
jianghaoran@...inos.cn,
vincent.mc.li@...il.com,
linux-trace-kernel@...r.kernel.org
Subject: [PATCH v4 3/7] LoongArch: BPF: Enable and fix trampoline-based tracing for module functions

Remove the restriction that blocked trampoline-based tracing of kernel
module functions, and fix the underlying issue that caused kernel
lockups when such functions were traced.

On entry to the trampoline, the return address register ra holds the
address of the instruction following the 'bl trampoline' instruction,
i.e. the resume point inside the traced function, while register t0
holds the parent function's return address. Refine the trampoline
return logic so that the register contents are correct both when
returning to the traced function and when returning to the parent
function.
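
For clarity, the intended register flow is sketched below. This is an
illustrative summary of the entry convention and of the two return
sequences emitted by this patch, not literal JIT output:

  /* On trampoline entry, as set up by the patched fentry sequence
   * (conceptually 'move $t0, $ra; bl trampoline'):
   *   ra - address of the instruction after 'bl trampoline',
   *        i.e. the resume point inside the traced function
   *   t0 - the parent function's return address
   */

  /* BPF_TRAMP_F_SKIP_FRAME: skip the traced function body */
  move  $ra, $t0          /* restore the original return address */
  jirl  $zero, $t0, 0     /* return directly to the parent */

  /* otherwise: resume the traced function body */
  move  $t1, $ra          /* resume point inside the traced function */
  move  $ra, $t0          /* traced function will return to the parent */
  jirl  $zero, $t1, 0     /* jump back into the traced function */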

Before this patch, the module_attach test in selftests/bpf locked up
the kernel: after the trampoline ran, execution jumped back to an
incorrect address, resulting in an infinite loop within the module
function.
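
The test can be exercised with, for example (assuming a built
tools/testing/selftests/bpf tree):

  ./test_progs -t module_attach
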
Fixes: 677e6123e3d2 ("LoongArch: BPF: Disable trampoline for kernel module function trace")
Signed-off-by: Chenghao Duan <duanchenghao@...inos.cn>
---
arch/loongarch/net/bpf_jit.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 8dc58781b8eb..76cd24646bec 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1265,7 +1265,7 @@ static int emit_jump_or_nops(void *target, void *ip, u32 *insns, bool is_call)
return 0;
}
- return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_T0 : LOONGARCH_GPR_ZERO, (u64)target);
+ return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_RA : LOONGARCH_GPR_ZERO, (u64)target);
}
static int emit_call(struct jit_ctx *ctx, u64 addr)
@@ -1622,14 +1622,12 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
/* To traced function */
/* Ftrace jump skips 2 NOP instructions */
- if (is_kernel_text((unsigned long)orig_call))
+ if (is_kernel_text((unsigned long)orig_call) ||
+ is_module_text_address((unsigned long)orig_call))
orig_call += LOONGARCH_FENTRY_NBYTES;
/* Direct jump skips 5 NOP instructions */
else if (is_bpf_text_address((unsigned long)orig_call))
orig_call += LOONGARCH_BPF_FENTRY_NBYTES;
- /* Module tracing not supported - cause kernel lockups */
- else if (is_module_text_address((unsigned long)orig_call))
- return -ENOTSUPP;
if (flags & BPF_TRAMP_F_CALL_ORIG) {
move_addr(ctx, LOONGARCH_GPR_A0, (const u64)im);
@@ -1722,12 +1720,16 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
emit_insn(ctx, ldd, LOONGARCH_GPR_FP, LOONGARCH_GPR_SP, 0);
emit_insn(ctx, addid, LOONGARCH_GPR_SP, LOONGARCH_GPR_SP, 16);
- if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* return to parent function */
- emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_RA, 0);
- else
- /* return to traced function */
+ move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T0, 0);
+ } else {
+ /* return to traced function */
+ move_reg(ctx, LOONGARCH_GPR_T1, LOONGARCH_GPR_RA);
+ move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
+ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T1, 0);
+ }
}
ret = ctx->idx;
--
2.25.1