Message-ID: <aBEGKrZVFHjIgNcl@linux.ibm.com>
Date: Tue, 29 Apr 2025 22:32:34 +0530
From: Saket Kumar Bhaskar <skb99@...ux.ibm.com>
To: Christophe Leroy <christophe.leroy@...roup.eu>
Cc: bpf@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
        linux-kernel@...r.kernel.org, ast@...nel.org, hbathini@...ux.ibm.com,
        andrii@...nel.org, daniel@...earbox.net, martin.lau@...ux.dev,
        eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev,
        john.fastabend@...il.com, kpsingh@...nel.org, sdf@...ichev.me,
        haoluo@...gle.com, jolsa@...nel.org, naveen@...nel.org,
        maddy@...ux.ibm.com, mpe@...erman.id.au, npiggin@...il.com
Subject: Re: [PATCH 2/2] powerpc, bpf: Inline bpf_get_smp_processor_id()

On Tue, Mar 11, 2025 at 06:51:28PM +0100, Christophe Leroy wrote:
> 
> 
> On 11/03/2025 at 17:09, Saket Kumar Bhaskar wrote:
> > 
> > Inline the calls to bpf_get_smp_processor_id() in the powerpc bpf jit.
> > 
> > On powerpc, the logical processor number (paca_index) is saved in the paca.
> > 
> > Here is how the powerpc JITed assembly changes after this commit:
> > 
> > Before:
> > 
> > cpu = bpf_get_smp_processor_id();
> > 
> > addis 12, 2, -517
> > addi 12, 12, -29456
> > mtctr 12
> > bctrl
> > mr      8, 3
> > 
> > After:
> > 
> > cpu = bpf_get_smp_processor_id();
> > 
> > lhz 8, 8(13)
> > 
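For context: on ppc64, r13 always holds the per-CPU paca pointer, so the
single lhz above is a 16-bit load of paca->paca_index. A minimal C sketch of
what the emitted instruction does (offset 8 taken from the disassembly above):

	/* conceptual equivalent of the JITed "lhz 8, 8(13)" */
	u32 cpu = local_paca->paca_index;	/* local_paca is register r13 on ppc64 */
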
> > To evaluate the performance improvements introduced by this change,
> > the benchmark described in [1] was employed.
> > 
> > +---------------+-------------------+-------------------+--------------+
> > |      Name     |      Before       |        After      |   % change   |
> > |---------------+-------------------+-------------------+--------------|
> > | glob-arr-inc  | 41.580 ± 0.034M/s | 54.137 ± 0.019M/s |   + 30.20%   |
> > | arr-inc       | 39.592 ± 0.055M/s | 54.000 ± 0.026M/s |   + 36.39%   |
> > | hash-inc      | 25.873 ± 0.012M/s | 26.334 ± 0.058M/s |   + 1.78%    |
> > +---------------+-------------------+-------------------+--------------+
> > 
> 
> Nice improvement.
> 
> I see that bpf_get_current_task() could be inlined as well: on PPC32 it is
> in r2, on PPC64 it is in the paca.
> 
I am working on inlining bpf_get_current_task() as well and will include it in v2.
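A rough sketch of what the PPC64 side could look like, mirroring the
BPF_FUNC_get_smp_processor_id hunk below (untested, and the use of the paca
__current field is an assumption on my side; the final v2 may differ):

	if (insn[i].src_reg == BPF_REG_0 && imm == BPF_FUNC_get_current_task) {
		/* on ppc64 the current task_struct pointer is cached in paca->__current */
		EMIT(PPC_RAW_LD(bpf_to_ppc(BPF_REG_0), _R13,
				offsetof(struct paca_struct, __current)));
		break;
	}
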
> > [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> > 
> > Signed-off-by: Saket Kumar Bhaskar <skb99@...ux.ibm.com>
> > ---
> >   arch/powerpc/net/bpf_jit_comp.c   | 10 ++++++++++
> >   arch/powerpc/net/bpf_jit_comp64.c |  5 +++++
> >   2 files changed, 15 insertions(+)
> > 
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index 3d4bd45a9a22..4b79b2d95469 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -445,6 +445,16 @@ bool bpf_jit_supports_percpu_insn(void)
> >          return true;
> >   }
> > 
> > +bool bpf_jit_inlines_helper_call(s32 imm)
> > +{
> > +       switch (imm) {
> > +       case BPF_FUNC_get_smp_processor_id:
> > +               return true;
> > +       default:
> > +               return false;
> > +       }
> > +}
> 
> What about PPC32 ?
> 
For now, v2 will cover PPC64 only.
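If PPC32 is added later, I would expect it to reduce to a single load as well
(a sketch only, assuming THREAD_INFO_IN_TASK and that r2 holds current on
ppc32, as you note for bpf_get_current_task):

	/* CONFIG_SMP: the CPU number sits in current->thread_info.cpu */
	EMIT(PPC_RAW_LWZ(bpf_to_ppc(BPF_REG_0), _R2,
			 offsetof(struct task_struct, thread_info.cpu)));
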
> 
> > +
> >   void *arch_alloc_bpf_trampoline(unsigned int size)
> >   {
> >          return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> > diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> > index 06f06770ceea..a8de12c026da 100644
> > --- a/arch/powerpc/net/bpf_jit_comp64.c
> > +++ b/arch/powerpc/net/bpf_jit_comp64.c
> > @@ -1087,6 +1087,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> >                  case BPF_JMP | BPF_CALL:
> >                          ctx->seen |= SEEN_FUNC;
> > 
> > +                       if (insn[i].src_reg == 0 && imm == BPF_FUNC_get_smp_processor_id) {
> 
> Please use BPF_REG_0 instead of just 0.
> 
Acknowledged
> > +                               EMIT(PPC_RAW_LHZ(bpf_to_ppc(BPF_REG_0), _R13, offsetof(struct paca_struct, paca_index)));
> 
> Can just use 'src_reg' instead of 'bpf_to_ppc(BPF_REG_0)'
> 
Will include this in v2.
> > +                               break;
> > +                       }
> > +
> >                          ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
> >                                                      &func_addr, &func_addr_fixed);
> >                          if (ret < 0)
> > --
> > 2.43.5
> > 
> 
Thanks for reviewing, Christophe.
