Message-ID: <20200520152439.GC26470@redhat.com>
Date: Wed, 20 May 2020 17:24:40 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Fox <afox@...hat.com>,
Stephen Johnston <sjohnsto@...hat.com>,
linux-kernel@...r.kernel.org,
Stanislaw Gruszka <sgruszka@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2] sched/cputime: make scale_stime() more precise

On 05/19, Peter Zijlstra wrote:
>
> > The new implementation does the additional div64_u64_rem() but according
> > to my naive measurements it is faster on x86_64, much faster if rtime/etc
> > are big enough. See
> >
> > https://lore.kernel.org/lkml/20200123130541.GA30620@redhat.com/
>
> Right, so -m32 when run on x86_64 CPUs isn't really fair, because then
> it still has hardware fls() for ilog2() and massively fast mult and
> division instructions. Try and run this on a puny 32bit ARM that maybe
> has a hardware multiplier.
OK,
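
(For reference: without a hardware fls(), the runtime ilog2() -- which is
essentially fls() - 1 -- ends up in the generic binary-search fallback,
roughly along the lines of asm-generic/bitops/fls.h; the compressed
sketch below is mine:

static int generic_fls(unsigned int x)
{
        int r = 32;

        if (!x)
                return 0;
        if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
        if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
        if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
        if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
        if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
        return r;
}

so each ilog2() adds a handful of compares and branches on top of the
already expensive multiply/divide.)
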
> Anyway, how about we write it like the below and then when some puny
> architecture comes complaining we can use Linus' original algorithm for
> their arch implementation.

Sure, I am fine either way, but...

> +static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
> {
> u64 q;
>
> asm ("mulq %2; divq %3" : "=a" (q)
> - : "a" (a), "rm" ((u64)mul), "rm" ((u64)div)
> + : "a" (a), "rm" (mul), "rm" (div)
> : "rdx");
...
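
(For those who don't read x86-64 asm: mulq leaves the 128-bit product in
rdx:rax, and divq divides rdx:rax by its operand, quotient in rax. So,
spelled out with the implicit registers, the helper above is, as a sketch:

static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
{
        u64 q;

        asm ("mulq %2; divq %3" /* rdx:rax = rax * mul, then          */
                                /* rax = rdx:rax / div, rdx = rem     */
             : "=a" (q)
             : "a" (a), "rm" (mul), "rm" (div)
             : "rdx");

        return q;
}

Note that divq raises a divide error if the quotient doesn't fit in
64 bits, so this variant relies on the end result fitting in u64.)
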
> +#ifndef mul_u64_u64_div_u64
> +static inline u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
> +{
> + u64 res = 0, div, rem;
> + int shift;
> +
> + /* can a * b overflow ? */
> + if (ilog2(a) + ilog2(b) > 62) {
> + /*
> + * (b * a) / c is equal to
> + *
> + * (b / c) * a +
> + * (b % c) * a / c
> + *
> + * if nothing overflows. Can the 1st multiplication
> + * overflow? Yes, but we do not care: this can only
> + * happen if the end result can't fit in u64 anyway.
> + *
> + * So the code below does
> + *
> + * res = (b / c) * a;
> + * b = b % c;
> + */
> + div = div64_u64_rem(b, c, &rem);
> + res = div * a;
> + b = rem;
> +
> + shift = ilog2(a) + ilog2(b) - 62;
> + if (shift > 0) {
> + /* drop precision */
> + b >>= shift;
> + c >>= shift;
> + if (!c)
> + return res;
> + }
> + }
> +
> + return res + div64_u64(a * b, c);
> +}
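
To see how much precision the fallback path actually gives up, something
like this userspace port can be compared against an __int128 reference.
This is my own throwaway sketch; the *_u helpers are trivial stand-ins
for the kernel ones, and div64_u64() is open-coded as plain division:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t u64;

/* userspace stand-in; kernel's runtime ilog2() is fls() - 1 */
static int ilog2_u(u64 v)
{
        return v ? 63 - __builtin_clzll(v) : -1;
}

static u64 div64_u64_rem_u(u64 a, u64 b, u64 *rem)
{
        *rem = a % b;
        return a / b;
}

/* the quoted algorithm, verbatim modulo the stand-ins */
static u64 mul_u64_u64_div_u64_u(u64 a, u64 b, u64 c)
{
        u64 res = 0, div, rem;
        int shift;

        if (ilog2_u(a) + ilog2_u(b) > 62) {
                div = div64_u64_rem_u(b, c, &rem);
                res = div * a;
                b = rem;

                shift = ilog2_u(a) + ilog2_u(b) - 62;
                if (shift > 0) {
                        b >>= shift;
                        c >>= shift;
                        if (!c)
                                return res;
                }
        }

        return res + (a * b) / c;
}

int main(void)
{
        u64 worst = 0;

        for (int i = 0; i < 10000000; i++) {
                u64 a = ((u64)rand() << 32) | (u64)rand();
                u64 b = ((u64)rand() << 32) | (u64)rand();
                /* "| 1" avoids division by zero */
                u64 c = (((u64)rand() << 32) | (u64)rand()) | 1;
                unsigned __int128 wide = (unsigned __int128)a * b;
                u64 exact, approx, diff;

                if (wide / c > (u64)-1)
                        continue;       /* result doesn't fit in u64 anyway */

                exact = (u64)(wide / c);
                approx = mul_u64_u64_div_u64_u(a, b, c);
                diff = exact > approx ? exact - approx : approx - exact;

                if (diff > worst) {
                        worst = diff;
                        printf("%016llx * %016llx / %016llx: off by %llu\n",
                               (unsigned long long)a, (unsigned long long)b,
                               (unsigned long long)c,
                               (unsigned long long)diff);
                }
        }
        return 0;
}
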
Note that, according to my measurements, the "asm" version is slower than
the generic code above when "a * b" doesn't fit in u64.
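
(If someone wants to reproduce that kind of timing in userspace, a loop
along these lines is enough; this is a hypothetical harness, not the
exact one behind my numbers:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

typedef uint64_t u64;

/* the version under test, e.g. the port sketched above, made non-static */
u64 mul_u64_u64_div_u64_u(u64 a, u64 b, u64 c);

static double now_sec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        /* a * b deliberately overflows u64 to exercise the slow path,
           while a * b / c still fits in u64 */
        u64 a = 1ULL << 48, b = (1ULL << 48) + 12345, c = (1ULL << 33) + 7;
        u64 sink = 0;
        double t0 = now_sec();

        for (int i = 0; i < (1 << 24); i++)
                sink += mul_u64_u64_div_u64_u(a, b + i, c);

        /* "sink" keeps the compiler from optimizing the loop away */
        printf("%.3f sec (sink %llu)\n", now_sec() - t0,
               (unsigned long long)sink);
        return 0;
})
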
Never mind, I agree with your version. Will you send this patch, or do you
want me to make a v3?

Oleg.