Message-ID: <aBEFc8YnuGozsdvD@linux.ibm.com>
Date: Tue, 29 Apr 2025 22:29:31 +0530
From: Saket Kumar Bhaskar <skb99@...ux.ibm.com>
To: Christophe Leroy <christophe.leroy@...roup.eu>
Cc: bpf@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
        linux-kernel@...r.kernel.org, ast@...nel.org, hbathini@...ux.ibm.com,
        andrii@...nel.org, daniel@...earbox.net, martin.lau@...ux.dev,
        eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
        john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
        haoluo@...gle.com, jolsa@...nel.org, naveen@...nel.org,
        maddy@...ux.ibm.com, mpe@...erman.id.au, npiggin@...il.com
Subject: Re: [PATCH 1/2] powerpc, bpf: Support internal-only MOV instruction
 to resolve per-CPU addrs

On Tue, Mar 11, 2025 at 06:38:23PM +0100, Christophe Leroy wrote:
> 
> 
> On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> > 
> > With the introduction of commit 7bdbf7446305 ("bpf: add special
> > internal-only MOV instruction to resolve per-CPU addrs"),
> > a new BPF instruction BPF_MOV64_PERCPU_REG has been added to
> > resolve absolute addresses of per-CPU data from their per-CPU
> > offsets. This update requires enabling support for this
> > instruction in the powerpc JIT compiler.
> > 
> > As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> > optimisations"), the per-CPU data offset for the CPU is stored in
> > the paca.
> > 
> > To support this BPF instruction in the powerpc JIT, the following
> > powerpc instructions are emitted:
> > 
> > mr  dst_reg, src_reg            // Move src_reg to dst_reg, if src_reg != dst_reg
> > ld  tmp1_reg, 48(13)            // Load the per-CPU data offset from the paca (r13) into tmp1_reg
> > add dst_reg, dst_reg, tmp1_reg  // Add the per-CPU offset to dst_reg
> 
> Why not do:
> 
>   add dst_reg, src_reg, tmp1_reg
> 
> instead of a combination of 'mr' and 'add' ?
> 
Will do it in v2. 
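Something like the below, reusing tmp1_reg and the PPC_RAW_* helpers already
used in this patch (just a sketch, not the final v2):

	EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
	/* Single add: dst = src + per-CPU offset, no separate mr needed */
	EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
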
> > 
> > To evaluate the performance improvements introduced by this change,
> > the benchmark described in [1] was employed.
> > 
> > Before Change:
> > glob-arr-inc   :   41.580 ± 0.034M/s
> > arr-inc        :   39.592 ± 0.055M/s
> > hash-inc       :   25.873 ± 0.012M/s
> > 
> > After Change:
> > glob-arr-inc   :   42.024 ± 0.049M/s
> > arr-inc        :   55.447 ± 0.031M/s
> > hash-inc       :   26.565 ± 0.014M/s
> > 
> > [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> > 
> > Signed-off-by: Saket Kumar Bhaskar <skb99@...ux.ibm.com>
> > ---
> >   arch/powerpc/net/bpf_jit_comp.c   | 5 +++++
> >   arch/powerpc/net/bpf_jit_comp64.c | 8 ++++++++
> >   2 files changed, 13 insertions(+)
> > 
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index 2991bb171a9b..3d4bd45a9a22 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -440,6 +440,11 @@ bool bpf_jit_supports_far_kfunc_call(void)
> >          return IS_ENABLED(CONFIG_PPC64);
> >   }
> > 
> > +bool bpf_jit_supports_percpu_insn(void)
> > +{
> > +       return true;
> > +}
> > +
> 
> What about PPC32 ?
> 
Right now we will enable it only for PPC64, so I will modify the return statement accordingly.
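Roughly, mirroring bpf_jit_supports_far_kfunc_call() above (a sketch of what
v2 could look like):

	bool bpf_jit_supports_percpu_insn(void)
	{
		/* Only the 64-bit JIT emits the per-CPU MOV for now */
		return IS_ENABLED(CONFIG_PPC64);
	}
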
> >   void *arch_alloc_bpf_trampoline(unsigned int size)
> >   {
> >          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> > diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> > index 233703b06d7c..06f06770ceea 100644
> > --- a/arch/powerpc/net/bpf_jit_comp64.c
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
> > @@ -679,6 +679,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> >                   */
> >                  case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
> >                  case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
> > +                       if (insn_is_mov_percpu_addr(&insn[i])) {
> > +                               if (dst_reg != src_reg)
> > +                                       EMIT(PPC_RAW_MR(dst_reg, src_reg));
> 
> Shouldn't be needed except for the non-SMP case maybe.
> 
Acknowledged.
> > +#ifdef CONFIG_SMP
> > +                               EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> > +                               EMIT(PPC_RAW_ADD(dst_reg, dst_reg, tmp1_reg));
> 
> Can use src_reg as first operand instead of dst_reg
> 
Will include this in v2 (a rough sketch combining the suggestions is at the end of this mail).
> > +#endif
> 
> data_offset always exists in paca_struct, please use IS_ENABLED(CONFIG_SMP)
> instead of #ifdef
> 
> > +                       }
> >                          if (imm == 1) {
> >                                  /* special mov32 for zext */
> >                                  EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
> > --
> > 2.43.5
> > 
> 
Thanks for reviewing, Chris.
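
For reference, folding the suggestions above (single add using src_reg,
IS_ENABLED(CONFIG_SMP) instead of #ifdef, and mr only for the !SMP case) into
the emit path might look roughly like this; just a sketch, the actual v2 may
differ:

	case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
	case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
		if (insn_is_mov_percpu_addr(&insn[i])) {
			if (IS_ENABLED(CONFIG_SMP)) {
				/* dst = src + per-CPU offset from the paca, in a single add */
				EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
				EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
			} else if (dst_reg != src_reg) {
				/* !SMP: no per-CPU offset to add, only propagate src */
				EMIT(PPC_RAW_MR(dst_reg, src_reg));
			}
			break;
		}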
