Date:   Thu, 4 Mar 2021 19:24:43 +0100
From:   Sedat Dilek <sedat.dilek@...il.com>
To:     Nick Desaulniers <ndesaulniers@...gle.com>
Cc:     Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Nathan Chancellor <nathan@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        clang-built-linux <clang-built-linux@...glegroups.com>,
        Clement Courbet <courbet@...gle.com>,
        Oleg Rombakh <olegrom@...gle.com>,
        Bill Wendling <morbo@...gle.com>
Subject: Re: [PATCH v2] sched: Optimize __calc_delta.

On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built
Linux <clang-built-linux@...glegroups.com> wrote:
>
> On Wed, Mar 3, 2021 at 2:48 PM Josh Don <joshdon@...gle.com> wrote:
> >
> > From: Clement Courbet <courbet@...gle.com>
> >
> > A significant portion of __calc_delta time is spent in the loop
> > shifting a u64 by 32 bits. Use `fls` instead of iterating.
> >
> > This is ~7x faster on benchmarks.
> >
> > The generic `fls` implementation (`generic_fls`) is still ~4x faster
> > than the loop.
> > Architectures that have a better implementation will make use of it. For
> > example, on x86 we get an additional factor of 2 in speed without a
> > dedicated implementation.
> >
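
[ For readers following along: the loop being replaced shifts `fact` right
  one bit at a time until it fits in 32 bits, and fls() of the high half
  gives that shift count in a single step. A standalone plain-C sketch of
  the equivalence, using a generic stand-in for the kernel's fls() -- the
  names below are illustrative, not kernel code: ]

```
#include <stdint.h>

/* Stand-in for the kernel's fls(): 1-based index of the most significant
 * set bit, 0 when x == 0 (same contract as <linux/bitops.h>). */
static int fls_standin(uint32_t x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Old approach: shift right one bit at a time until fact fits in 32 bits. */
static uint64_t normalize_loop(uint64_t fact, int *shift)
{
	while (fact >> 32) {
		fact >>= 1;
		(*shift)--;
	}
	return fact;
}

/* New approach: the same shift count comes from fls() of the high half. */
static uint64_t normalize_fls(uint64_t fact, int *shift)
{
	uint32_t fact_hi = (uint32_t)(fact >> 32);

	if (fact_hi) {
		int fs = fls_standin(fact_hi);

		*shift -= fs;
		fact >>= fs;
	}
	return fact;
}
```
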
> > On gcc, the asm versions of `fls` are about the same speed as the
> > builtin. On clang, the versions that use `fls` are more than twice as
> > slow as the builtin, because of the way the `fls` function is written:
> > clang puts the value in memory (https://godbolt.org/z/EfMbYe). This
> > bug is filed at https://bugs.llvm.org/show_bug.cgi?id=49406.
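
[ The constraint at issue is the "rm" on the x86 bsr-based fls(): gcc picks
  the register alternative, while current clang always takes the memory one
  and spills the value first. A simplified x86-only illustration of the
  pattern -- not the kernel's exact fls(): ]

```
/* Simplified illustration, x86 only, not the kernel's exact fls(): the
 * "rm" constraint lets the compiler use either a register or a memory
 * operand for x. GCC chooses the register; current clang spills x to the
 * stack and uses the memory form, which is the slowdown described above. */
static inline int fls_asm_sketch(unsigned int x)
{
	int r;

	if (!x)
		return 0;
	asm("bsrl %1, %0" : "=r" (r) : "rm" (x));
	return r + 1;
}
```
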
>
> Hi Josh, Thanks for helping get this patch across the finish line.
> Would you mind updating the commit message to point to
> https://bugs.llvm.org/show_bug.cgi?id=20197?
>
> >
> > ```
> > name                                   cpu/op
> > BM_Calc<__calc_delta_loop>             9.57ms ±12%
> > BM_Calc<__calc_delta_generic_fls>      2.36ms ±13%
> > BM_Calc<__calc_delta_asm_fls>          2.45ms ±13%
> > BM_Calc<__calc_delta_asm_fls_nomem>    1.66ms ±12%
> > BM_Calc<__calc_delta_asm_fls64>        2.46ms ±13%
> > BM_Calc<__calc_delta_asm_fls64_nomem>  1.34ms ±15%
> > BM_Calc<__calc_delta_builtin>          1.32ms ±11%
> > ```
> >
> > Signed-off-by: Clement Courbet <courbet@...gle.com>
> > Signed-off-by: Josh Don <joshdon@...gle.com>
> > ---
> >  kernel/sched/fair.c  | 19 +++++++++++--------
> >  kernel/sched/sched.h |  1 +
> >  2 files changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b13634..a691371960ae 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -229,22 +229,25 @@ static void __update_inv_weight(struct load_weight *lw)
> >  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> >  {
> >         u64 fact = scale_load_down(weight);
> > +       u32 fact_hi = (u32)(fact >> 32);
> >         int shift = WMULT_SHIFT;
> > +       int fs;
> >
> >         __update_inv_weight(lw);
> >
> > -       if (unlikely(fact >> 32)) {
> > -               while (fact >> 32) {
> > -                       fact >>= 1;
> > -                       shift--;
> > -               }
> > +       if (unlikely(fact_hi)) {
> > +               fs = fls(fact_hi);
> > +               shift -= fs;
> > +               fact >>= fs;
> >         }
> >
> >         fact = mul_u32_u32(fact, lw->inv_weight);
> >
> > -       while (fact >> 32) {
> > -               fact >>= 1;
> > -               shift--;
> > +       fact_hi = (u32)(fact >> 32);
> > +       if (fact_hi) {
> > +               fs = fls(fact_hi);
> > +               shift -= fs;
> > +               fact >>= fs;
> >         }
> >
> >         return mul_u64_u32_shr(delta_exec, fact, shift);
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 10a1522b1e30..714af71cf983 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -36,6 +36,7 @@
> >  #include <uapi/linux/sched/types.h>
> >
> >  #include <linux/binfmts.h>
> > +#include <linux/bitops.h>
>
> This hunk of the patch is curious.  I assume that bitops.h is needed
> for fls(); if so, why not #include it in kernel/sched/fair.c?
> Otherwise this potentially hurts compile time for all TUs that include
> kernel/sched/sched.h.
>

I have v2 as-is in my custom patchset and have just booted it on bare metal.

As Nick points out, moving the include makes sense to me.
We have a lot of includes in the wrong places, which increases build time.
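
Something like the below in fair.c would keep it out of the shared header
(untested, and assuming nothing else in sched.h grows a user of fls()):

```
/* kernel/sched/fair.c: pull in fls() next to the file's other includes
 * instead of adding it to the widely-included kernel/sched/sched.h. */
#include <linux/bitops.h>
```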

- Sedat -

> >  #include <linux/blkdev.h>
> >  #include <linux/compat.h>
> >  #include <linux/context_tracking.h>
> > --
> > 2.30.1.766.gb4fecdf3b7-goog
> >
>
>
> --
> Thanks,
> ~Nick Desaulniers
>
