Message-ID: <148nop5q-s958-n0q4-66r8-o91ns4pnr4on@onlyvoer.pbz>
Date: Tue, 20 May 2025 18:24:58 -0400 (EDT)
From: Nicolas Pitre <npitre@...libre.com>
To: David Laight <david.laight.linux@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org, 
    u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>, 
    Peter Zijlstra <peterz@...radead.org>, 
    Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v2 next 3/4] lib: Add mul_u64_add_u64_div_u64() and
 mul_u64_u64_div_u64_roundup()

On Tue, 20 May 2025, David Laight wrote:

> On Mon, 19 May 2025 23:03:21 -0400 (EDT)
> Nicolas Pitre <npitre@...libre.com> wrote:
> 
> > On Sun, 18 May 2025, David Laight wrote:
> > 
> > > The existing mul_u64_u64_div_u64() rounds down; a 'rounding up'
> > > variant needs 'divisor - 1' added in between the multiply and
> > > divide, so it cannot easily be done by a caller.
> > > 
> > > Add mul_u64_add_u64_div_u64(a, b, c, d) that calculates (a * b + c)/d
> > > and implement the 'round down' and 'round up' using it.
> > > 
> > > Update the x86-64 asm to optimise for 'c' being a constant zero.
> > > 
> > > For architectures that support u128 check for a 64bit product after
> > > the multiply (will be cheap).
> > > Leave in the early check for other architectures (mostly 32bit) when
> > > 'c' is zero to avoid the multi-part multiply.  
> > 
> > I agree with this, except for the "'c' is zero" part. More below.
> > 
> > > Note that the cost of the 128bit divide will dwarf the rest of the code.
> > > This function is very slow on everything except x86-64 (very very slow
> > > on 32bit).
> > > 
> > > Add kerndoc definitions for all three functions.
> > > 
> > > Signed-off-by: David Laight <david.laight.linux@...il.com>
> > > ---
> > > Changes for v2 (formerly patch 1/3):
> > > - Reinstate the early call to div64_u64() on 32bit when 'c' is zero.
> > >   Although I'm not convinced the path is common enough to be worth
> > >   the two ilog2() calls.
> > > 
> > >  arch/x86/include/asm/div64.h | 19 ++++++++++-----
> > >  include/linux/math64.h       | 45 +++++++++++++++++++++++++++++++++++-
> > >  lib/math/div64.c             | 21 ++++++++++-------
> > >  3 files changed, 70 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h
> > > index 9931e4c7d73f..7a0a916a2d7d 100644
> > > --- a/arch/x86/include/asm/div64.h
> > > +++ b/arch/x86/include/asm/div64.h
> > > @@ -84,21 +84,28 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
> > >   * Will generate an #DE when the result doesn't fit u64, could fix with an
> > >   * __ex_table[] entry when it becomes an issue.
> > >   */
> > > -static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
> > > +static inline u64 mul_u64_add_u64_div_u64(u64 a, u64 mul, u64 add, u64 div)
> > >  {
> > >  	u64 q;
> > >  
> > > -	asm ("mulq %2; divq %3" : "=a" (q)
> > > -				: "a" (a), "rm" (mul), "rm" (div)
> > > -				: "rdx");
> > > +	if (statically_true(!add)) {
> > > +		asm ("mulq %2; divq %3" : "=a" (q)
> > > +					: "a" (a), "rm" (mul), "rm" (div)
> > > +					: "rdx");
> > > +	} else {
> > > +		asm ("mulq %2; addq %3, %%rax; adcq $0, %%rdx; divq %4"
> > > +			: "=a" (q)
> > > +			: "a" (a), "rm" (mul), "rm" (add), "rm" (div)
> > > +			: "rdx");
> > > +	}
> > >  
> > >  	return q;
> > >  }
> > > -#define mul_u64_u64_div_u64 mul_u64_u64_div_u64
> > > +#define mul_u64_add_u64_div_u64 mul_u64_add_u64_div_u64
> > >  
> > >  static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 div)
> > >  {
> > > -	return mul_u64_u64_div_u64(a, mul, div);
> > > +	return mul_u64_add_u64_div_u64(a, mul, 0, div);
> > >  }
> > >  #define mul_u64_u32_div	mul_u64_u32_div
> > >  
> > > diff --git a/include/linux/math64.h b/include/linux/math64.h
> > > index 6aaccc1626ab..e1c2e3642cec 100644
> > > --- a/include/linux/math64.h
> > > +++ b/include/linux/math64.h
> > > @@ -282,7 +282,50 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
> > >  }
> > >  #endif /* mul_u64_u32_div */
> > >  
> > > -u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div);
> > > +/**
> > > + * mul_u64_add_u64_div_u64 - unsigned 64bit multiply, add, and divide
> > > + * @a: first unsigned 64bit multiplicand
> > > + * @b: second unsigned 64bit multiplicand
> > > + * @c: unsigned 64bit addend
> > > + * @d: unsigned 64bit divisor
> > > + *
> > > + * Multiply two 64bit values together to generate a 128bit product
> > > + * add a third value and then divide by a fourth.
> > > + * May BUG()/trap if @d is zero or the quotient exceeds 64 bits.
> > > + *
> > > + * Return: (@a * @b + @c) / @d
> > > + */
> > > +u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d);
> > > +
> > > +/**
> > > + * mul_u64_u64_div_u64 - unsigned 64bit multiply and divide
> > > + * @a: first unsigned 64bit multiplicand
> > > + * @b: second unsigned 64bit multiplicand
> > > + * @d: unsigned 64bit divisor
> > > + *
> > > + * Multiply two 64bit values together to generate a 128bit product
> > > + * and then divide by a third value.
> > > + * May BUG()/trap if @d is zero or the quotient exceeds 64 bits.  
> > 
> > If the quotient exceeds 64 bits, the optimized x86 version truncates the 
> > value to the low 64 bits. The C version returns a saturated value i.e. 
> > UINT64_MAX (implemented with a -1). Nothing actually traps in that case.
> 
> Nope. I've only got the iAPX 286 and 80386 reference manuals to hand.
> Both say that 'interrupt 0' happens on overflow.
> I don't expect the later documentation is any different.

Hmmm... OK, you're right. I must have botched my test code initially.

> If the kernel code is going to have an explicit instruction to trap
> (rather than the code 'just trapping') it really is best to use BUG().
> If nothing else it guarantees a trap regardless of the architecture
> and compiler.

OK in the overflow case.

However in the divide-by-0 case it is best if, for a given architecture, 
the behavior is coherent across all division operations.
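
Concretely, something along these lines in the generic fallback would
give both properties (a hypothetical sketch, with n_hi standing for the
high 64 bits of a*b + c; not the actual lib/math/div64.c code):

	/* Zero divisor: defer to a plain u64 division so whatever this
	 * architecture does for division by zero happens here as well. */
	if (unlikely(d == 0))
		return div64_u64(a, d);

	/* Quotient overflow: an explicit BUG() guarantees a trap
	 * regardless of architecture and compiler, as suggested above. */
	if (unlikely(n_hi >= d))
		BUG();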

> > > +	if (!c && ilog2(a) + ilog2(b) <= 62)
> > > +		return div64_u64(a * b, d);
> > > +  
> > 
> > Here you should do:
> > 
> > 	if (ilog2(a) + ilog2(b) <= 62) {
> > 		u64 ab = a * b;
> > 		u64 abc = ab + c;
> > 		if (ab <= abc)
> > 			return div64_u64(abc, d);
> > 	}
> > 
> > This is cheap and won't unconditionally discard the faster path when c != 0.
> 
> That isn't really cheap.
> ilog2() is likely to be a similar cost to a multiply
> (my brain remembers them both as 'latency 3' on x86).

I'm not discussing the ilog2() usage though. I'm just against limiting 
the test to !c. My suggestion is about supporting all values of c.
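
As a concrete case: with a = b = 3 and c = ~0ULL the check above sees
ab = 9 and abc = 9 + 0xffffffffffffffff = 8 after wrapping, so
"ab <= abc" is false, the wrapped addition is caught, and we fall
through to the full 128-bit path instead of returning a bogus small
quotient.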

> My actual preference is to completely delete that test and rely
> on the post-multiply check.
> 
> The 64 by 64 multiply code is actually fairly cheap.
> On x86-64 it is only a few clocks slower than the u128 version
> (and that is (much) the same code that should be generated for 32bit).

Of course x86-64 is not the primary target here as it has its own 
optimized version.
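
For reference, the "multi-part multiply" we are trying to avoid on
32-bit is the usual decomposition into 32x32 partial products, roughly
like this (a generic sketch, not the code from this patch):

	u32 a_lo = a, a_hi = a >> 32;
	u32 b_lo = b, b_hi = b >> 32;
	u64 lo  = (u64)a_lo * b_lo;
	u64 m1  = (u64)a_lo * b_hi;
	u64 m2  = (u64)a_hi * b_lo;
	u64 hi  = (u64)a_hi * b_hi;
	u64 mid = m1 + m2;		/* may wrap: that carry is worth 2^96 */
	u64 n_lo, n_hi;

	if (mid < m1)			/* propagate the wrapped carry */
		hi += 1ULL << 32;
	n_lo = lo + (mid << 32);	/* low 64 bits of a*b */
	if (n_lo < lo)
		hi++;
	n_hi = hi + (mid >> 32);	/* high 64 bits of a*b */

On x86-64 all of the above collapses into a single mulq instruction,
which is why the early-out is far less interesting there.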


Nicolas
