Message-ID: <20251031091918.643b0868@pumpkin>
Date: Fri, 31 Oct 2025 09:19:18 +0000
From: David Laight <david.laight.linux@...il.com>
To: Nicolas Pitre <npitre@...libre.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, Biju Das <biju.das.jz@...renesas.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>, Thomas
Gleixner <tglx@...utronix.de>, Li RongQing <lirongqing@...du.com>, Yu Kuai
<yukuai3@...wei.com>, Khazhismel Kumykov <khazhy@...omium.org>, Jens Axboe
<axboe@...nel.dk>, x86@...nel.org
Subject: Re: [PATCH v4 next 3/9] lib: mul_u64_u64_div_u64() simplify check
for a 64bit product
On Wed, 29 Oct 2025 14:11:08 -0400 (EDT)
Nicolas Pitre <npitre@...libre.com> wrote:
> On Wed, 29 Oct 2025, David Laight wrote:
>
> > If the product is only 64bits, div64_u64() can be used for the divide.
> > Replace the pre-multiply check (ilog2(a) + ilog2(b) <= 62) with a
> > simple post-multiply check that the high 64bits are zero.
> >
> > This has the advantage of being simpler and more accurate, with less code.
> > It will always be faster when the product is larger than 64bits.
> >
> > Most 64bit cpus have a native 64x64=128 bit multiply; this is needed
> > (for the low 64bits) even when div64_u64() is called - so the early
> > check gains nothing and is just extra code.
> >
> > 32bit cpus will need a compare (etc) to generate the 64bit ilog2()
> > from two 32bit bit scans - so that is non-trivial.
> > (Never mind the mess of x86's 'bsr' and any oddball cpu without
> > fast bit-scan instructions.)
> > Whereas the additional instructions for the 128bit multiply result
> > are pretty much one multiply and two adds (typically the 'adc $0,%reg'
> > can be run in parallel with the instruction that follows).
> >
> > The only outliers are 64bit systems without a 128bit multiply and
> > simple in-order 32bit ones with fast bit scan but needing extra
> > instructions to get the high bits of the multiply result.
> > I doubt it makes much difference to either; the latter is definitely
> > not mainstream.
> >
> > If anyone is worried about the analysis they can look at the
> > generated code for x86 (especially when cmov isn't used).
> >
> > Signed-off-by: David Laight <david.laight.linux@...il.com>
>
> Comment below.
>
>
> > ---
> >
> > Split from patch 3 for v2, unchanged since.
> >
> > lib/math/div64.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/lib/math/div64.c b/lib/math/div64.c
> > index 1092f41e878e..7158d141b6e9 100644
> > --- a/lib/math/div64.c
> > +++ b/lib/math/div64.c
> > @@ -186,9 +186,6 @@ EXPORT_SYMBOL(iter_div_u64_rem);
> > #ifndef mul_u64_u64_div_u64
> > u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
> > {
> > - if (ilog2(a) + ilog2(b) <= 62)
> > - return div64_u64(a * b, d);
> > -
> > #if defined(__SIZEOF_INT128__)
> >
> > /* native 64x64=128 bits multiplication */
> > @@ -224,6 +221,9 @@ u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
> > return ~0ULL;
> > }
> >
> > + if (!n_hi)
> > + return div64_u64(n_lo, d);
>
> I'd move this before the overflow test. If this is to be taken then
> you'll save one test. same cost otherwise.
>
I wanted the 'divide by zero' result to be consistent.
Additionally, the change to stop the x86-64 version panicking on
overflow also makes it return ~0 for divide by zero.
If that is done, then this version needs to be consistent and
return ~0 for divide by zero - which div64_u64() won't do.
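
For illustration only, a minimal sketch of that ordering (assuming the
__int128 path and reusing the n_hi/n_lo names from the diff; kernel
types u64/div64_u64 are assumed and the 128-by-64 long-division tail is
elided, so this is not the exact kernel source):

	u64 mul_u64_u64_div_u64_sketch(u64 a, u64 b, u64 d)
	{
		unsigned __int128 n = (unsigned __int128)a * b;
		u64 n_hi = n >> 64, n_lo = n;

		/*
		 * Overflow test first: d == 0 implies n_hi >= d, so a
		 * divide by zero also returns ~0 here.
		 */
		if (n_hi >= d)
			return ~0ULL;

		/* Product fits in 64 bits: a plain 64-bit divide is enough. */
		if (!n_hi)
			return div64_u64(n_lo, d);

		/* ... full 128-by-64 division for the remaining cases ... */
		return n / d;	/* placeholder, not the real code path */
	}

With the !n_hi test placed before the overflow test instead,
div64_u64(n_lo, 0) would be reached and the divide-by-zero behaviour
would differ between the two paths.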
It is worth remembering that the chance of (a * b + c)/d being ~0
is pretty small (for non-test inputs), and any code that might expect
such a value is likely to have to handle overflow as well.
(Not to mention avoiding overflow of 'a' and 'b'.)
So using ~0 for overflow isn't really a problem.
David