Message-ID: <os833s47-15so-979n-1o8s-22n20q096182@syhkavp.arg>
Date: Thu, 26 Jun 2025 23:48:23 -0400 (EDT)
From: Nicolas Pitre <nico@...xnic.net>
To: David Laight <david.laight.linux@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v3 next 09/10] lib: mul_u64_u64_div_u64() Optimise the
divide code
On Thu, 26 Jun 2025, David Laight wrote:
> On Tue, 17 Jun 2025 21:33:23 -0400 (EDT)
> Nicolas Pitre <nico@...xnic.net> wrote:
>
> > On Tue, 17 Jun 2025, Nicolas Pitre wrote:
> >
> > > On Sat, 14 Jun 2025, David Laight wrote:
> > >
> > > > Replace the bit by bit algorithm with one that generates 16 bits
> > > > per iteration on 32bit architectures and 32 bits on 64bit ones.
> > > >
> > > > On my zen 5 this reduces the time for the tests (using the generic
> > > > code) from ~3350ns to ~1000ns.
> > > >
> > > > Running the 32bit algorithm on 64bit x86 takes ~1500ns.
> > > > It'll be slightly slower on a real 32bit system, mostly due
> > > > to register pressure.
> > > >
> > > > The savings for 32bit x86 are much higher (tested in userspace).
> > > > The worst case (lots of bits in the quotient) drops from ~900 clocks
> > > > to ~130 (pretty much independent of the arguments).
> > > > Other 32bit architectures may see better savings.
> > > >
> > > > It is possible to optimise for divisors that span less than
> > > > __LONG_WIDTH__/2 bits. However I suspect they don't happen that often
> > > > and it doesn't remove any slow cpu divide instructions which dominate
> > > > the result.
> > > >
> > > > Signed-off-by: David Laight <david.laight.linux@...il.com>
> > >
> > > Nice work. I had to be fully awake to review this one.
> > > Some suggestions below.
> >
> > Here's a patch with my suggestions applied to make it easier to figure
> > them out. The added "inline" is necessary to fix compilation on ARM32.
> > The "likely()" makes for better assembly and this part is pretty much
> > likely anyway. I've explained the rest previously, although this is a
> > better implementation.
>
> I've been trying to find time to do some more performance measurements.
> I did a few last night and managed to realise at least some of what
> is happening.
>
> I've dropped the responses in here since there is a copy of the code.
>
> The 'TLDR' is that the full divide takes my zen5 about 80 clocks
> (including some test red tape) provided the branch-predictor is
> predicting correctly.
> I get that consistently for repeats of the same division, and moving
> onto the next one - provided they take the same path.
> (eg for the last few 'random' value tests.)
> However if I include something that takes a different path (eg divisor
> doesn't have the top bit set, or the product has enough high zero bits
> to take the 'optimised' path) then the first two or three repeats of the
> next division are at least 20 clocks slower.
> In the kernel these calls aren't going to be common enough (and with
> similar inputs) to train the branch predictor.
> So the mispredicted branches really start to dominate.
> This is similar to the issue that Linus keeps mentioning about kernel
> code being mainly 'cold cache'.
>
> >
> > commit 99ea338401f03efe5dbebe57e62bd7c588409c5c
> > Author: Nicolas Pitre <nico@...xnic.net>
> > Date: Tue Jun 17 14:42:34 2025 -0400
> >
> > fixup! lib: mul_u64_u64_div_u64() Optimise the divide code
> >
> > diff --git a/lib/math/div64.c b/lib/math/div64.c
> > index 3c9fe878ce68..740e59a58530 100644
> > --- a/lib/math/div64.c
> > +++ b/lib/math/div64.c
> > @@ -188,7 +188,7 @@ EXPORT_SYMBOL(iter_div_u64_rem);
> >
> > #if !defined(mul_u64_add_u64_div_u64) || defined(test_mul_u64_add_u64_div_u64)
> >
> > -static u64 mul_add(u32 a, u32 b, u32 c)
> > +static inline u64 mul_add(u32 a, u32 b, u32 c)
>
> If inline is needed, it needs to be always_inline.
Here "inline" was needed only because I got a "defined but unused"
complaint from the compiler in some cases. The "always_inline" is not
absolutely necessary.
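For completeness, an alternative sketch (not something I'm proposing for
the patch, just noting it exists): the "defined but unused" warning can
also be silenced with the kernel's __maybe_unused annotation rather than
relying on inlining:

	/* Sketch only: keep the helper out of line but tell the compiler
	 * that an unused definition is fine in the configurations that
	 * don't end up calling it. */
	static __maybe_unused u64 mul_add(u32 a, u32 b, u32 c)
	{
		return add_u64_u32(mul_u32_u32(a, b), c);
	}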
> > {
> > return add_u64_u32(mul_u32_u32(a, b), c);
> > }
> > @@ -246,7 +246,7 @@ static inline u32 mul_u64_long_add_u64(u64 *p_lo, u64 a, u32 b, u64 c)
> >
> > u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
> > {
> > - unsigned long d_msig, q_digit;
> > + unsigned long n_long, d_msig, q_digit;
> > unsigned int reps, d_z_hi;
> > u64 quotient, n_lo, n_hi;
> > u32 overflow;
> > @@ -271,36 +271,21 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
> >
> > /* Left align the divisor, shifting the dividend to match */
> > d_z_hi = __builtin_clzll(d);
> > - if (d_z_hi) {
> > + if (likely(d_z_hi)) {
>
> I don't think likely() helps much.
> But for 64bit this can be unconditional...
It helps if you look at how gcc orders the code. Without the likely, the
normalization code is moved out of line and a conditional forward branch
(predicted untaken by default) is used to reach it even though this code
is quite likely. With likely() gcc inserts the normalization
code in line and a conditional forward branch (predicted untaken by
default) is used to skip over that code. Clearly we want the latter.
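For reference, the annotation is only a static prediction hint; a minimal
sketch of its core (the in-tree definition may wrap this with
branch-profiling instrumentation):

	/* Core of the kernel's likely()/unlikely() hints. */
	#define likely(x)   __builtin_expect(!!(x), 1)
	#define unlikely(x) __builtin_expect(!!(x), 0)

So likely(d_z_hi) doesn't change the test itself, it only tells gcc which
side of the branch to keep on the fall-through path.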
> > d <<= d_z_hi;
> > n_hi = n_hi << d_z_hi | n_lo >> (64 - d_z_hi);
>
> ... replace 'n_lo >> (64 - d_z_hi)' with 'n_lo >> (63 - d_z_hi) >> 1'.
Why? I don't see the point.
> The extra shift probably costs too much on 32bit though.
>
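To make the suggestion concrete, this is my reading of it (sketch only,
untested): with the double shift the count never reaches 64, so on 64bit
the whole normalization could run unconditionally and degenerate to a
no-op when d_z_hi is 0:

	/* Sketch: when d_z_hi == 0 every shift below is a no-op, without
	 * relying on an undefined shift by 64. */
	d_z_hi = __builtin_clzll(d);
	d <<= d_z_hi;
	n_hi = n_hi << d_z_hi | n_lo >> (63 - d_z_hi) >> 1;
	n_lo <<= d_z_hi;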
>
> > n_lo <<= d_z_hi;
> > }
> >
> > - reps = 64 / BITS_PER_ITER;
> > - /* Optimise loop count for small dividends */
> > - if (!(u32)(n_hi >> 32)) {
> > - reps -= 32 / BITS_PER_ITER;
> > - n_hi = n_hi << 32 | n_lo >> 32;
> > - n_lo <<= 32;
> > - }
> > -#if BITS_PER_ITER == 16
> > - if (!(u32)(n_hi >> 48)) {
> > - reps--;
> > - n_hi = add_u64_u32(n_hi << 16, n_lo >> 48);
> > - n_lo <<= 16;
> > - }
> > -#endif
>
> You don't want the branch in the loop.
> Once predicted correctly it makes little difference where you put
> the test - but that won't be 'normal'.
> Unless it gets trained odd-even it'll get mispredicted at least once.
> For 64bit (zen5) the test outside the loop was less bad
> (IIRC dropped to 65 clocks - so probably worth it).
> I need to check an older cpu (got a few Intel ones).
>
>
> > -
> > /* Invert the dividend so we can use add instead of subtract. */
> > n_lo = ~n_lo;
> > n_hi = ~n_hi;
> >
> > /*
> > - * Get the most significant BITS_PER_ITER bits of the divisor.
> > + * Get the rounded-up most significant BITS_PER_ITER bits of the divisor.
> > * This is used to get a low 'guestimate' of the quotient digit.
> > */
> > - d_msig = (d >> (64 - BITS_PER_ITER)) + 1;
> > + d_msig = (d >> (64 - BITS_PER_ITER)) + !!(d << BITS_PER_ITER);
>
> For 64bit x64 that is pretty much zero cost
> (gcc manages to use the 'Z' flag from the '<< 32' in a 'setne' to get a 1.
> I didn't look hard enough to find where the zero register came from
> since 'setne' only changes the low 8 bits.)
> But it is horrid for 32bit - added quite a few clocks.
> I'm not at all sure it is worth it though.
If you do a hardcoded +1 when it is not necessary, in most of those
cases you end up with one or two extra overflow adjustment loops. Those
loops are unnecessary when d_msig is not increased by 1.
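A minimal worked instance, assuming BITS_PER_ITER == 32 (i.e. a 64bit
build), with two example divisors:

	/* d_msig = (d >> 32) + !!(d << 32); */
	u64 d1 = 0x8000000000000000ULL;	/* low half zero -> d_msig = 0x80000000 (exact)      */
	u64 d2 = 0x8000000000000001ULL;	/* low half set  -> d_msig = 0x80000001 (rounded up)  */
	/* An unconditional +1 would give 0x80000001 for d1 as well, making
	 * the q_digit estimate needlessly low and costing the extra
	 * "guestimate being low" adjustment passes mentioned above. */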
> >
> > /*
> > * Now do a 'long division' with BITS_PER_ITER bit 'digits'.
> > @@ -308,12 +293,17 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
> > * The worst case is dividing ~0 by 0x8000 which requires two subtracts.
> > */
> > quotient = 0;
> > - while (reps--) {
> > - q_digit = (unsigned long)(~n_hi >> (64 - 2 * BITS_PER_ITER)) / d_msig;
> > + for (reps = 64 / BITS_PER_ITER; reps; reps--) {
> > + quotient <<= BITS_PER_ITER;
> > + n_long = ~n_hi >> (64 - 2 * BITS_PER_ITER);
> > /* Shift 'n' left to align with the product q_digit * d */
> > overflow = n_hi >> (64 - BITS_PER_ITER);
> > n_hi = add_u64_u32(n_hi << BITS_PER_ITER, n_lo >> (64 - BITS_PER_ITER));
> > n_lo <<= BITS_PER_ITER;
> > + /* cut it short if q_digit would be 0 */
> > + if (n_long < d_msig)
> > + continue;
>
> As I said above that gets mispredicted too often.
>
> > + q_digit = n_long / d_msig;
>
> I fiddled with the order of the lines of code - it changes the way the object
> code is laid out and changes the clock count 'randomly' by one or two.
> Noise compared to mispredicted branches.
>
> I've not tested on an older system with a slower divide.
> I suspect older (Haswell and Broadwell) may benefit from the divide
> being scheduled earlier.
>
> If anyone wants a copy of the test program I can send you one.
>
> David
>
> > /* Add product to negated divisor */
> > overflow += mul_u64_long_add_u64(&n_hi, d, q_digit, n_hi);
> > /* Adjust for the q_digit 'guestimate' being low */
> > @@ -322,7 +312,7 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
> > n_hi += d;
> > overflow += n_hi < d;
> > }
> > - quotient = add_u64_long(quotient << BITS_PER_ITER, q_digit);
> > + quotient = add_u64_long(quotient, q_digit);
> > }
> >
> > /*
>
>