Message-ID: <CAOnJCUK7KTO0n_8wDYDxh2ULtEvMWF-re2dEX6nng_tX1AGgPw@mail.gmail.com>
Date:   Wed, 20 Sep 2023 14:27:31 -0700
From:   Atish Patra <atishp@...shpatra.org>
To:     Conor Dooley <conor@...nel.org>
Cc:     Evan Green <evan@...osinc.com>,
        Anup Patel <apatel@...tanamicro.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Heiko Stuebner <heiko@...ech.de>,
        Marc Zyngier <maz@...nel.org>, linux-kernel@...r.kernel.org,
        Palmer Dabbelt <palmer@...osinc.com>,
        Conor Dooley <conor.dooley@...rochip.com>,
        David Laight <David.Laight@...lab.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Jisheng Zhang <jszhang@...nel.org>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Greentime Hu <greentime.hu@...ive.com>,
        linux-riscv@...ts.infradead.org,
        Andrew Jones <ajones@...tanamicro.com>
Subject: Re: [PATCH v2] RISC-V: Probe misaligned access speed in parallel

On Wed, Sep 20, 2023 at 2:04 PM Conor Dooley <conor@...nel.org> wrote:
>
> Yo,
>
> On Wed, Sep 20, 2023 at 12:38:01PM -0700, Evan Green wrote:
> > Probing for misaligned access speed takes about 0.06 seconds. On a
> > system with 64 cores, doing this in smp_callin() means it's done
> > serially, extending boot time by 3.8 seconds. That's a lot of boot time.
> >
> > Instead of measuring each CPU serially, let's do the measurements on
> > all CPUs in parallel. If we disable preemption on all CPUs, the
> > jiffies stop ticking, so we can do this in stages of 1) everybody
> > except core 0, then 2) core 0.
> >
> > The measurement call in smp_callin() stays around, but is now
> > conditionalized to only run if a new CPU shows up after the round of
> > in-parallel measurements has run. The goal is to have the measurement
> > call not run during boot or suspend/resume, but only on a hotplug
> > addition.
> >
> > Reported-by: Jisheng Zhang <jszhang@...nel.org>
> > Closes: https://lore.kernel.org/all/mhng-9359993d-6872-4134-83ce-c97debe1cf9a@palmer-ri-x1c9/T/#mae9b8f40016f9df428829d33360144dc5026bcbf
> > Fixes: 584ea6564bca ("RISC-V: Probe for unaligned access speed")
> > Signed-off-by: Evan Green <evan@...osinc.com>
> > Reviewed-by: Andrew Jones <ajones@...tanamicro.com>
> > Tested-by: Andrew Jones <ajones@...tanamicro.com>
> >
> > ---
> >
> > Changes in v2:
> >  - Removed new global, used system_state == SYSTEM_RUNNING instead
> >    (Jisheng)
> >  - Added tags
> >
> >  arch/riscv/include/asm/cpufeature.h |  2 +-
> >  arch/riscv/kernel/cpufeature.c      | 22 +++++++++++++++++-----
> >  arch/riscv/kernel/smpboot.c         | 11 ++++++++++-
> >  3 files changed, 28 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
> > index d0345bd659c9..b139796392d0 100644
> > --- a/arch/riscv/include/asm/cpufeature.h
> > +++ b/arch/riscv/include/asm/cpufeature.h
> > @@ -30,6 +30,6 @@ DECLARE_PER_CPU(long, misaligned_access_speed);
> >  /* Per-cpu ISA extensions. */
> >  extern struct riscv_isainfo hart_isa[NR_CPUS];
> >
> > -void check_unaligned_access(int cpu);
> > +int check_unaligned_access(void *unused);
> >
> >  #endif
> > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> > index 1cfbba65d11a..40bb854fcb96 100644
> > --- a/arch/riscv/kernel/cpufeature.c
> > +++ b/arch/riscv/kernel/cpufeature.c
> > @@ -556,8 +556,9 @@ unsigned long riscv_get_elf_hwcap(void)
> >       return hwcap;
> >  }
> >
> > -void check_unaligned_access(int cpu)
> > +int check_unaligned_access(void *unused)
> >  {
> > +     int cpu = smp_processor_id();
> >       u64 start_cycles, end_cycles;
> >       u64 word_cycles;
> >       u64 byte_cycles;
> > @@ -571,7 +572,7 @@ void check_unaligned_access(int cpu)
> >       page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
> >       if (!page) {
> >               pr_warn("Can't alloc pages to measure memcpy performance");
> > -             return;
> > +             return 0;
>
> Dumb question maybe, but I am limited setup-wise at the moment due to
> a hardware failure, which makes checking stuff hard: why the signature
> change? Is it a requirement for on_each_cpu()?
>

It's a requirement for smp_call_on_cpu().
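For reference, the two cross-call APIs want different callback prototypes, which is what forces the change (a sketch of the relevant declarations from <linux/smp.h>; exact spellings may vary slightly by kernel version):

```c
/* on_each_cpu() runs a void callback on every online CPU. */
void on_each_cpu(void (*func)(void *info), void *info, int wait);

/*
 * smp_call_on_cpu() runs the callback on one CPU via a workqueue and
 * propagates its int return value, so the callback must return int --
 * hence check_unaligned_access()'s new int (void *) prototype.
 */
int smp_call_on_cpu(unsigned int cpu, int (*func)(void *par), void *par, bool phys);
```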

> >       }
> >
> >       /* Make an unaligned destination buffer. */
> > @@ -643,15 +644,26 @@ void check_unaligned_access(int cpu)
> >
> >  out:
> >       __free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
> > +     return 0;
> > +}
> > +
> > +static void check_unaligned_access_nonboot_cpu(void *param)
> > +{
> > +     if (smp_processor_id() != 0)
> > +             check_unaligned_access(param);
> >  }
> >
> > -static int check_unaligned_access_boot_cpu(void)
> > +static int check_unaligned_access_all_cpus(void)
> >  {
> > -     check_unaligned_access(0);
> > +     /* Check everybody except 0, who stays behind to tend jiffies. */
> > +     on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1);
> > +
> > +     /* Check core 0. */
> > +     smp_call_on_cpu(0, check_unaligned_access, NULL, true);
> >       return 0;
>
> Why does this function return an int if it can only return 0?
>

Should we define a check_unaligned_access_boot_cpu() wrapper to avoid
the return type change? We could then also get rid of the unused
parameter in check_unaligned_access().

I also noticed that on_each_cpu() already invokes the function with
preemption disabled, and check_unaligned_access() disables it again.
That's probably harmless, but the nested disable is unnecessary on the
non-boot CPUs.

We could instead just wrap the boot-CPU call to check_unaligned_access()
in preempt_disable()/preempt_enable() inside
check_unaligned_access_boot_cpu().
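Put together, the suggestion would look roughly like this (an untested
sketch against the patch above; the structure is just what's being
floated in this thread, not a tested change):

```c
static int check_unaligned_access_all_cpus(void)
{
	/* Check everybody except 0, who stays behind to tend jiffies.
	 * on_each_cpu() disables preemption around the callback itself. */
	on_each_cpu(check_unaligned_access_nonboot_cpu, NULL, 1);

	/* Check core 0 directly, keeping jiffies stable during the
	 * measurement, instead of going through smp_call_on_cpu(). */
	preempt_disable();
	check_unaligned_access(NULL);
	preempt_enable();
	return 0;
}
```

That would let check_unaligned_access() go back to returning void, since
nothing would consume its return value anymore.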

> Cheers,
> Conor.
>
> >  }
> >
> > -arch_initcall(check_unaligned_access_boot_cpu);
> > +arch_initcall(check_unaligned_access_all_cpus);
> >
> >  #ifdef CONFIG_RISCV_ALTERNATIVE
> >  /*
> > diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
> > index 1b8da4e40a4d..a014955b8699 100644
> > --- a/arch/riscv/kernel/smpboot.c
> > +++ b/arch/riscv/kernel/smpboot.c
> > @@ -27,6 +27,7 @@
> >  #include <linux/sched/mm.h>
> >  #include <asm/cpu_ops.h>
> >  #include <asm/cpufeature.h>
> > +#include <asm/hwprobe.h>
> >  #include <asm/irq.h>
> >  #include <asm/mmu_context.h>
> >  #include <asm/numa.h>
> > @@ -246,7 +247,15 @@ asmlinkage __visible void smp_callin(void)
> >
> >       numa_add_cpu(curr_cpuid);
> >       set_cpu_online(curr_cpuid, 1);
> > -     check_unaligned_access(curr_cpuid);
> > +
> > +     /*
> > +      * Boot-time misaligned access speed measurements are done in parallel
> > +      * in an initcall. Only measure here for hotplug.
> > +      */
> > +     if ((system_state == SYSTEM_RUNNING) &&
> > +         (per_cpu(misaligned_access_speed, curr_cpuid) == RISCV_HWPROBE_MISALIGNED_UNKNOWN)) {
> > +             check_unaligned_access(NULL);
> > +     }
> >
> >       if (has_vector()) {
> >               if (riscv_v_setup_vsize())
> > --
> > 2.34.1
> >
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@...ts.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv



-- 
Regards,
Atish
