Message-ID: <7647d75d-6374-108b-0e1a-277ea7ac7101@baylibre.com>
Date: Sat, 5 Apr 2025 23:06:35 -0400 (EDT)
From: Nicolas Pitre <npitre@...libre.com>
To: David Laight <david.laight.linux@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
Uwe Kleine-König <u.kleine-koenig@...libre.com>,
Oleg Nesterov <oleg@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH 1/3] lib: Add mul_u64_add_u64_div_u64() and
mul_u64_u64_div_u64_roundup()
On Sat, 5 Apr 2025, Nicolas Pitre wrote:
> On Sat, 5 Apr 2025, David Laight wrote:
>
> > The existing mul_u64_u64_div_u64() rounds down. A 'rounding up'
> > variant needs 'divisor - 1' added in between the multiply and the
> > divide, so it cannot easily be done by a caller.
> >
> > Add mul_u64_add_u64_div_u64(a, b, c, d), which calculates
> > (a * b + c) / d, and implement the 'round down' and 'round up'
> > variants using it [sketched below].
> >
> > Update the x86-64 asm to optimise for 'c' being a constant zero.
> >
> > For architectures that support u128, check for a 64-bit product
> > after the multiply (the check is cheap).
> > Leave in the early check for other architectures (mostly 32-bit)
> > when 'c' is zero, to avoid the multi-part multiply.
> >
> > Note that the cost of the 128-bit divide will dwarf the rest of the
> > code. This function is very slow on everything except x86-64 (very,
> > very slow on 32-bit).
> >
> > Add kernel-doc definitions for all three functions.
> >
> > Signed-off-by: David Laight <david.laight.linux@...il.com>
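
For illustration, the composition described above amounts to roughly the
following sketch. It is reconstructed from the description, not taken
from the patch itself; the kernel proper cannot lean on the compiler's
128-by-64 division helper (__udivti3 is not available there) and carries
its own multi-part fallback:

typedef unsigned long long u64;

/* Sketch of (a * b + c) / d using a compiler-provided unsigned
 * __int128 (gcc/clang on 64-bit targets). Illustrative only.
 */
static u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
{
	unsigned __int128 n = (unsigned __int128)a * b + c;

	/* Cheap early-out: the 128-bit sum fits in 64 bits. */
	if (!(u64)(n >> 64))
		return (u64)n / d;

	return (u64)(n / d);	/* full 128-by-64 divide */
}

/* Round down: add nothing before dividing. */
static u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
{
	return mul_u64_add_u64_div_u64(a, b, 0, d);
}

/* Round up: adding d - 1 pushes any non-zero remainder over. */
static u64 mul_u64_u64_div_u64_roundup(u64 a, u64 b, u64 d)
{
	return mul_u64_add_u64_div_u64(a, b, d - 1, d);
}
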
>
> Reviewed-by: Nicolas Pitre <npitre@...libre.com>
>
> Sidenote: The 128-bit division cost is proportional to the number of
> bits in the final result. So if the result is 0x0080000000000000 then
> the loop will execute only once and exit early.

Just to clarify what I said: the 128-bit division cost is proportional
to the number of _set_ bits in the final result.
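
For illustration, a classic shift-and-subtract division sets exactly one
bit of the quotient per loop pass, which is where that behaviour comes
from. Here is a hypothetical 64-bit analogue (names and shape are mine;
the kernel's loop works on a 128-bit remainder):

#include <stdint.h>

/* Assumes d != 0. Each pass subtracts the largest d << k that still
 * fits in the remainder, setting quotient bit k, so the pass count
 * equals the number of set bits in the final quotient. For a result
 * of 0x0080000000000000 (a single set bit) the loop runs once.
 */
static uint64_t div_per_set_bit(uint64_t n, uint64_t d)
{
	uint64_t q = 0;

	while (n >= d) {
		/* First estimate of the highest usable shift. */
		int k = __builtin_clzll(d) - __builtin_clzll(n);

		if ((d << k) > n)	/* clz estimate may overshoot by one */
			k--;
		q |= 1ULL << k;		/* one pass per set quotient bit */
		n -= d << k;
	}
	return q;
}
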
Nicolas