Message-Id: <cc13dfdbbcdde12aeebc6970e354b8dbbb81454f063a7861e54fbde1b510dc06@mail.kernel.org>
Date: Fri, 5 Dec 2025 14:01:27 +0000 (UTC)
From: bot+bpf-ci@...nel.org
To: skb99@...ux.ibm.com,bpf@...r.kernel.org,linuxppc-dev@...ts.ozlabs.org,linux-kernel@...r.kernel.org
Cc: hbathini@...ux.ibm.com,sachinpb@...ux.ibm.com,venkat88@...ux.ibm.com,andrii@...nel.org,eddyz87@...il.com,ast@...nel.org,daniel@...earbox.net,martin.lau@...ux.dev,song@...nel.org,yonghong.song@...ux.dev,john.fastabend@...il.com,kpsingh@...nel.org,sdf@...ichev.me,haoluo@...gle.com,jolsa@...nel.org,christophe.leroy@...roup.eu,naveen@...nel.org,maddy@...ux.ibm.com,mpe@...erman.id.au,npiggin@...il.com,ast@...nel.org,andrii@...nel.org,daniel@...earbox.net,martin.lau@...nel.org,eddyz87@...il.com,yonghong.song@...ux.dev,clm@...a.com,ihor.solodrai@...ux.dev
Subject: Re: [PATCH bpf-next v3 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 5e976730b2f5..d53e9cd7563f 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
[ ... ]
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 1fe37128c876..816f9d7d9e5d 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
> case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
>
> + if (insn_is_mov_percpu_addr(&insn[i])) {
> + if (IS_ENABLED(CONFIG_SMP)) {
> + EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> + EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
> + } else if (src_reg != dst_reg) {
> + EMIT(PPC_RAW_MR(dst_reg, src_reg));
> + }
> + }
> +
Does this code need a break statement here? After handling the per-CPU
MOV instruction, execution falls through to the subsequent MOV handling
code. In the common case where dst_reg != src_reg and CONFIG_SMP is
enabled, the code first emits an ADD to compute the per-CPU address in
dst_reg, but then continues to the final else-if of this case statement,
which emits MR dst_reg, src_reg and overwrites the computed address.
Other architectures (x86 and arm64) use a break after their per-CPU
instruction handling in the BPF_MOV case of bpf_jit_build_body().
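For illustration, a rough, untested sketch of what that could look like
(this is just the hunk quoted above with a break and comments added,
mirroring the break already used by the insn_is_cast_user block below):

	if (insn_is_mov_percpu_addr(&insn[i])) {
		if (IS_ENABLED(CONFIG_SMP)) {
			/* dst = src + this CPU's data offset from the PACA */
			EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
			EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
		} else if (src_reg != dst_reg) {
			EMIT(PPC_RAW_MR(dst_reg, src_reg));
		}
		/* per-CPU MOV fully handled; do not fall through */
		break;
	}

With the break in place, neither the cast_user handling nor the
sign-extension/MR chain further down can clobber dst_reg for this
instruction.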
> if (insn_is_cast_user(&insn[i])) {
> EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
> PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
> PPC_BCC_SHORT(COND_EQ, (ctx->idx + 2) * 4);
> EMIT(PPC_RAW_OR(tmp1_reg, dst_reg, tmp1_reg));
> EMIT(PPC_RAW_MR(dst_reg, tmp1_reg));
> break;
> }
>
> if (imm == 1) {
> /* special mov32 for zext */
> EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
> break;
> } else if (off == 8) {
> EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));
> } else if (off == 16) {
> EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));
> } else if (off == 32) {
> EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));
> } else if (dst_reg != src_reg)
^^^^
This condition can be true for per-CPU instructions
(which have off == -1), causing the computed per-CPU
address to be overwritten with src_reg; see the note on
insn_is_mov_percpu_addr() after this hunk.
> EMIT(PPC_RAW_MR(dst_reg, src_reg));
> goto bpf_alu32_trunc;
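For reference, the check that identifies this internal-only MOV keys off
the special off value. A paraphrased sketch of the helper (from memory of
the generic BPF headers; please double-check the exact definition in the
tree):

	/* paraphrased sketch; struct bpf_insn comes from <linux/bpf.h> */
	#define BPF_ADDR_PERCPU	(-1)

	static inline bool insn_is_mov_percpu_addr(const struct bpf_insn *insn)
	{
		/* internal-only MOV: dst = src + this_cpu_offset */
		return insn->code == (BPF_ALU64 | BPF_MOV | BPF_X) &&
		       insn->off == BPF_ADDR_PERCPU;
	}

Since off is -1 rather than 8, 16 or 32, none of the sign-extension arms
above match, and without a break the fall-through ends in the
dst_reg != src_reg MR.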
---
AI reviewed your patch. Please fix the bug or reply by email explaining
why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19964922727