Message-ID: <1a14025c-6cd5-4e04-b49d-cd65b8b35e68@kernel.org>
Date: Tue, 25 Nov 2025 11:02:12 +0100
From: "Christophe Leroy (CS GROUP)" <chleroy@...nel.org>
To: Saket Kumar Bhaskar <skb99@...ux.ibm.com>, bpf@...r.kernel.org,
 linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc: hbathini@...ux.ibm.com, sachinpb@...ux.ibm.com, venkat88@...ux.ibm.com,
 andrii@...nel.org, eddyz87@...il.com, ast@...nel.org, daniel@...earbox.net,
 martin.lau@...ux.dev, song@...nel.org, yonghong.song@...ux.dev,
 john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
 haoluo@...gle.com, jolsa@...nel.org, naveen@...nel.org, maddy@...ux.ibm.com,
 mpe@...erman.id.au, npiggin@...il.com
Subject: Re: [PATCH bpf-next v2 1/2] powerpc64/bpf: Support internal-only MOV
 instruction to resolve per-CPU addrs



On 17/11/2025 at 07:52, Saket Kumar Bhaskar wrote:
> With the introduction of commit 7bdbf7446305 ("bpf: add special
> internal-only MOV instruction to resolve per-CPU addrs"),
> a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
> resolve absolute addresses of per-CPU data from their per-CPU
> offsets. This update requires enabling support for this
> instruction in the powerpc JIT compiler.
> 
> As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> optimisations"), the per-CPU data offset for the CPU is stored in
> the paca.
> 
> To support this BPF instruction in the powerpc JIT, the following
> powerpc instructions are emitted:
> 
> ld tmp1_reg, 48(13)		//Load per-CPU data offset from paca(r13) in tmp1_reg.
> add dst_reg, src_reg, tmp1_reg	//Add the per cpu offset to the dst.
> mr dst_reg, src_reg		//Move src_reg to dst_reg, if src_reg != dst_reg

Something must be wrong here. The 'add' was done into dst_reg, so the 
'mr' that follows overwrites the result of the addition with the source 
register.
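
For reference, what the JITed sequence is meant to compute, written as 
plain C (an illustrative sketch only, assuming the usual local_paca / 
data_offset accessors; not part of the patch):

	/* dst = src + this CPU's data offset, i.e. turn a per-CPU
	 * offset into an absolute per-CPU address for this CPU:
	 */
	u64 dst = src + local_paca->data_offset;	/* paca is in r13 */

so any trailing 'mr dst_reg, src_reg' would throw that result away.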

> 
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
> 
> Before Change:
> glob-arr-inc   :   41.580 ± 0.034M/s
> arr-inc        :   39.592 ± 0.055M/s
> hash-inc       :   25.873 ± 0.012M/s
> 
> After Change:
> glob-arr-inc   :   42.024 ± 0.049M/s
> arr-inc        :   55.447 ± 0.031M/s
> hash-inc       :   26.565 ± 0.014M/s
> 
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> 
> Signed-off-by: Saket Kumar Bhaskar <skb99@...ux.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
>   arch/powerpc/net/bpf_jit_comp64.c | 9 +++++++++
>   2 files changed, 14 insertions(+)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 88ad5ba7b87f..2f2230ae2145 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -466,6 +466,11 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
>   	return true;
>   }
>   
> +bool bpf_jit_supports_percpu_insn(void)
> +{
> +	return IS_ENABLED(CONFIG_PPC64);
> +}
> +
>   void *arch_alloc_bpf_trampoline(unsigned int size)
>   {
>   	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 1fe37128c876..21486706b5ea 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -918,6 +918,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>   		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
>   		case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
>   
> +			if (insn_is_mov_percpu_addr(&insn[i])) {
> +				if (IS_ENABLED(CONFIG_SMP)) {
> +					EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> +					EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
> +				} else {
> +					EMIT(PPC_RAW_MR(dst_reg, src_reg));

You should make sure dst_reg is different from src_reg before emitting 
this; otherwise you may generate one of the following instructions, 
which change the thread priority (see the sketch below the macro list):

#define HMT_very_low()		asm volatile("or 31, 31, 31	# very low priority")
#define HMT_low()		asm volatile("or 1, 1, 1	# low priority")
#define HMT_medium_low()	asm volatile("or 6, 6, 6	# medium low priority")
#define HMT_medium()		asm volatile("or 2, 2, 2	# medium priority")
#define HMT_medium_high()	asm volatile("or 5, 5, 5	# medium high priority")
#define HMT_high()		asm volatile("or 3, 3, 3	# high priority")
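
A minimal sketch of the guard being suggested (illustrative only, 
reusing the variable names from the hunk above; not the author's final 
patch):

			if (insn_is_mov_percpu_addr(&insn[i])) {
				if (IS_ENABLED(CONFIG_SMP)) {
					EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
					EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
				} else if (dst_reg != src_reg) {
					/* copy only when it actually moves data,
					 * so no 'or rN,rN,rN' hint is emitted
					 */
					EMIT(PPC_RAW_MR(dst_reg, src_reg));
				}
			}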


> +				}
> +			}
> +
>   			if (insn_is_cast_user(&insn[i])) {
>   				EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
>   				PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));

