Message-ID: <20250520223700.2ec735fd@pumpkin>
Date: Tue, 20 May 2025 22:37:00 +0100
From: David Laight <david.laight.linux@...il.com>
To: Nicolas Pitre <npitre@...libre.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
 u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>, Peter
 Zijlstra <peterz@...radead.org>, Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v2 next 3/4] lib: Add mul_u64_add_u64_div_u64() and
 mul_u64_u64_div_u64_roundup()

On Mon, 19 May 2025 23:03:21 -0400 (EDT)
Nicolas Pitre <npitre@...libre.com> wrote:

> On Sun, 18 May 2025, David Laight wrote:
> 
> > The existing mul_u64_u64_div_u64() rounds down; a 'rounding up'
> > variant needs 'divisor - 1' added in between the multiply and
> > divide, so it cannot easily be done by a caller.
> > 
> > Add mul_u64_add_u64_div_u64(a, b, c, d) that calculates (a * b + c)/d
> > and implement the 'round down' and 'round up' using it.
> > 
> > Update the x86-64 asm to optimise for 'c' being a constant zero.
> > 
> > For architectures that support u128, check for a 64bit product after
> > the multiply (this will be cheap).
> > Leave in the early check for other architectures (mostly 32bit) when
> > 'c' is zero to avoid the multi-part multiply.  
> 
> I agree with this, except for the "'c' is zero" part. More below.
> 
> > Note that the cost of the 128bit divide will dwarf the rest of the code.
> > This function is very slow on everything except x86-64 (very very slow
> > on 32bit).
> > 
> > Add kernel-doc definitions for all three functions.
> > 
> > Signed-off-by: David Laight <david.laight.linux@...il.com>
> > ---
> > Changes for v2 (formerly patch 1/3):
> > - Reinstate the early call to div64_u64() on 32bit when 'c' is zero.
> >   Although I'm not convinced the path is common enough to be worth
> >   the two ilog2() calls.
> > 
> >  arch/x86/include/asm/div64.h | 19 ++++++++++-----
> >  include/linux/math64.h       | 45 +++++++++++++++++++++++++++++++++++-
> >  lib/math/div64.c             | 21 ++++++++++-------
> >  3 files changed, 70 insertions(+), 15 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h
> > index 9931e4c7d73f..7a0a916a2d7d 100644
> > --- a/arch/x86/include/asm/div64.h
> > +++ b/arch/x86/include/asm/div64.h
> > @@ -84,21 +84,28 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
> >   * Will generate an #DE when the result doesn't fit u64, could fix with an
> >   * __ex_table[] entry when it becomes an issue.
> >   */
> > -static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
> > +static inline u64 mul_u64_add_u64_div_u64(u64 a, u64 mul, u64 add, u64 div)
> >  {
> >  	u64 q;
> >  
> > -	asm ("mulq %2; divq %3" : "=a" (q)
> > -				: "a" (a), "rm" (mul), "rm" (div)
> > -				: "rdx");
> > +	if (statically_true(!add)) {
> > +		asm ("mulq %2; divq %3" : "=a" (q)
> > +					: "a" (a), "rm" (mul), "rm" (div)
> > +					: "rdx");
> > +	} else {
> > +		asm ("mulq %2; addq %3, %%rax; adcq $0, %%rdx; divq %4"
> > +			: "=a" (q)
> > +			: "a" (a), "rm" (mul), "rm" (add), "rm" (div)
> > +			: "rdx");
> > +	}
> >  
> >  	return q;
> >  }
> > -#define mul_u64_u64_div_u64 mul_u64_u64_div_u64
> > +#define mul_u64_add_u64_div_u64 mul_u64_add_u64_div_u64
> >  
> >  static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 div)
> >  {
> > -	return mul_u64_u64_div_u64(a, mul, div);
> > +	return mul_u64_add_u64_div_u64(a, mul, 0, div);
> >  }
> >  #define mul_u64_u32_div	mul_u64_u32_div
> >  
> > diff --git a/include/linux/math64.h b/include/linux/math64.h
> > index 6aaccc1626ab..e1c2e3642cec 100644
> > --- a/include/linux/math64.h
> > +++ b/include/linux/math64.h
> > @@ -282,7 +282,50 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
> >  }
> >  #endif /* mul_u64_u32_div */
> >  
> > -u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div);
> > +/**
> > + * mul_u64_add_u64_div_u64 - unsigned 64bit multiply, add, and divide
> > + * @a: first unsigned 64bit multiplicand
> > + * @b: second unsigned 64bit multiplicand
> > + * @c: unsigned 64bit addend
> > + * @d: unsigned 64bit divisor
> > + *
> > + * Multiply two 64bit values together to generate a 128bit product
> > + * add a third value and then divide by a fourth.
> > + * May BUG()/trap if @d is zero or the quotient exceeds 64 bits.
> > + *
> > + * Return: (@a * @b + @c) / @d
> > + */
> > +u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d);
> > +
> > +/**
> > + * mul_u64_u64_div_u64 - unsigned 64bit multiply and divide
> > + * @a: first unsigned 64bit multiplicand
> > + * @b: second unsigned 64bit multiplicand
> > + * @d: unsigned 64bit divisor
> > + *
> > + * Multiply two 64bit values together to generate a 128bit product
> > + * and then divide by a third value.
> > + * May BUG()/trap if @d is zero or the quotient exceeds 64 bits.  
> 
> If the quotient exceeds 64 bits, the optimized x86 version truncates the 
> value to the low 64 bits. The C version returns a saturated value i.e. 
> UINT64_MAX (implemented with a -1). Nothing actually traps in that case.

Nope. I've only got the iAPX 286 and 80386 reference manuals to hand.
Both say that 'interrupt 0' happens on overflow.
I don't expect the later documentation to be any different.

If the kernel code is going to have an explicit instruction to trap
(rather than the code 'just trapping') it really is best to use BUG().
If nothing else, it guarantees a trap regardless of the architecture
and compiler.
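
Purely as a sketch of what I mean (this helper is made up, it isn't in
the patch): making both failure cases an explicit BUG() in the C path
is only a couple of compares and traps on every architecture and
compiler:

	/*
	 * Sketch: n_hi:n_lo is the 128bit numerator, d the divisor.
	 * The quotient fits in 64 bits iff n_hi < d, which is exactly
	 * the condition divq checks before raising #DE on x86-64.
	 */
	static inline void mul_div_check(u64 n_hi, u64 d)
	{
		BUG_ON(!d);		/* divide by zero */
		BUG_ON(n_hi >= d);	/* quotient doesn't fit in 64 bits */
	}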

> 
> > + *
> > + * Return: @a * @b / @d
> > + */
> > +#define mul_u64_u64_div_u64(a, b, d) mul_u64_add_u64_div_u64(a, b, 0, d)
> > +
> > +/**
> > + * mul_u64_u64_div_u64_roundup - unsigned 64bit multiply and divide rounded up
> > + * @a: first unsigned 64bit multiplicand
> > + * @b: second unsigned 64bit multiplicand
> > + * @d: unsigned 64bit divisor
> > + *
> > + * Multiply two 64bit values together to generate a 128bit product
> > + * and then divide and round up.
> > + * May BUG()/trap if @d is zero or the quotient exceeds 64 bits.
> > + *
> > + * Return: (@a * @b + @d - 1) / @d
> > + */
> > +#define mul_u64_u64_div_u64_roundup(a, b, d) \
> > +	({ u64 _tmp = (d); mul_u64_add_u64_div_u64(a, b, _tmp - 1, _tmp); })
> > +
> >  
> >  /**
> >   * DIV64_U64_ROUND_UP - unsigned 64bit divide with 64bit divisor rounded up
> > diff --git a/lib/math/div64.c b/lib/math/div64.c
> > index c426fa0660bc..66bfb6159f02 100644
> > --- a/lib/math/div64.c
> > +++ b/lib/math/div64.c
> > @@ -183,29 +183,31 @@ u32 iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
> >  }
> >  EXPORT_SYMBOL(iter_div_u64_rem);
> >  
> > -#ifndef mul_u64_u64_div_u64
> > -u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
> > +#ifndef mul_u64_add_u64_div_u64
> > +u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
> >  {
> >  	/* Trigger exception if divisor is zero */
> >  	BUG_ON(!d);
> >  
> > -	if (ilog2(a) + ilog2(b) <= 62)
> > -		return div64_u64(a * b, d);
> > -
> >  #if defined(__SIZEOF_INT128__)
> >  
> >  	/* native 64x64=128 bits multiplication */
> > -	u128 prod = (u128)a * b;
> > +	u128 prod = (u128)a * b + c;
> >  	u64 n_lo = prod, n_hi = prod >> 64;
> >  
> >  #else
> >  
> > +	if (!c && ilog2(a) + ilog2(b) <= 62)
> > +		return div64_u64(a * b, d);
> > +  
> 
> Here you should do:
> 
> 	if (ilog2(a) + ilog2(b) <= 62) {
> 		u64 ab = a * b;
> 		u64 abc = ab + c;
> 		if (ab <= abc)
> 			return div64_u64(abc, d);
> 	}
> 
> This is cheap and won't unconditionally discard the faster path when c != 0.

That isn't really cheap.
ilog2() is likely to be a similar cost to a multiply
(my brain remembers them both as 'latency 3' on x86).
My actual preference is to completely delete that test and rely
on the post-multiply check.

The 64 by 64 multiply code is actually fairly cheap.
On x86-64 it is only a few clocks slower than the u128 version
(and that is (much) the same code that should be generated for 32bit).
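
In other words the shape I'd prefer is just the pieces the patch
already has, with no pre-check at all (u128 variant shown for brevity;
the 32bit path would fall through to the same test):

	/* no ilog2() pre-check: always form the 128bit product */
	u128 prod = (u128)a * b + c;
	u64 n_lo = prod, n_hi = prod >> 64;

	/* one compare catches every product that fits in 64 bits */
	if (!n_hi)
		return div64_u64(n_lo, d);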

> 
> >  	/* perform a 64x64=128 bits multiplication manually */
> >  	u32 a_lo = a, a_hi = a >> 32, b_lo = b, b_hi = b >> 32;
> >  	u64 x, y, z;
> >  
> > -	x = (u64)a_lo * b_lo;
> > +	/* Since (x-1)(x-1) + 2(x-1) == x.x - 1 two u32 can be added to a u64 */
> > +	x = (u64)a_lo * b_lo + (u32)c;
> >  	y = (u64)a_lo * b_hi + (u32)(x >> 32);
> > +	y += (u32)(c >> 32);

Those two adds to y should be swapped - I need to do a v3 and will swap them.
It might save one clock - my timing code is accurate, but not THAT accurate.
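For reference, the swapped order would be (just a sketch, not the
actual v3 code):

	x = (u64)a_lo * b_lo + (u32)c;
	/* add the independent c_hi first ... */
	y = (u64)a_lo * b_hi + (u32)(c >> 32);
	/*
	 * ... then the carry out of x, which has to wait for the
	 * first multiply and add to complete.
	 * The (x-1)(x-1) + 2(x-1) == x.x - 1 identity above means
	 * neither add can overflow y.
	 */
	y += (u32)(x >> 32);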

> >  	z = (u64)a_hi * b_hi + (u32)(y >> 32);
> >  	y = (u64)a_hi * b_lo + (u32)y;
> >  	z += (u32)(y >> 32);
> > @@ -215,6 +217,9 @@ u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)

If we assume the compiler is sane (gcc isn't), a/b_hi/lo are in registers,
and mul has a latency of 3 (and add 1), the code above can execute as:
clock 0: x_h/x_lo = a_lo * b_lo
clock 1: y_h/y_lo = a_lo * b_hi
clock 2: y1_ho/y1_lo = a_hi * b_lo
clock 3: z_hi/z_lo = a_hi * b_hi; x_lo += c_lo
clock 4: x_hi += carry; y_lo += c_hi
clock 5: y_hi += carry; y_lo += x_hi
clock 6: y_hi += carry; y1_lo += y_lo
clock 7: y1_hi += carry; z_lo += y_hi
clock 8: z_hi += carry; z_lo += y1_hi
clock 9: z_hi += carry
I don't think any more instructions can run in parallel.
But it really isn't that long at all.
Your 'fast path' test will be nearly that long even ignoring
mis-predicted branches.

For my updated version I've managed to stop gcc spilling zero words
to stack!

	David

> >  
> >  #endif
> >  
> > +	if (!n_hi)
> > +		return div64_u64(n_lo, d);
> > +
> >  	int shift = __builtin_ctzll(d);
> >  
> >  	/* try reducing the fraction in case the dividend becomes <= 64 bits */
> > @@ -261,5 +266,5 @@ u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
> >  
> >  	return res;
> >  }
> > -EXPORT_SYMBOL(mul_u64_u64_div_u64);
> > +EXPORT_SYMBOL(mul_u64_add_u64_div_u64);
> >  #endif
> > -- 
> > 2.39.5
> > 
> >   

