Message-ID: <20250406133024.0d422c82@pumpkin>
Date: Sun, 6 Apr 2025 13:30:24 +0100
From: David Laight <david.laight.linux@...il.com>
To: Nicolas Pitre <npitre@...libre.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
Uwe Kleine-König <u.kleine-koenig@...libre.com>, Oleg
Nesterov <oleg@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Biju Das
<biju.das.jz@...renesas.com>
Subject: Re: [PATCH 1/3] lib: Add mul_u64_add_u64_div_u64() and
mul_u64_u64_div_u64_roundup()
On Sun, 6 Apr 2025 10:35:16 +0100
David Laight <david.laight.linux@...il.com> wrote:
> On Sat, 5 Apr 2025 21:46:25 -0400 (EDT)
> Nicolas Pitre <npitre@...libre.com> wrote:
>
> > On Sat, 5 Apr 2025, David Laight wrote:
> >
> > > The existing mul_u64_u64_div_u64() rounds down; a 'rounding up'
> > > variant needs 'divisor - 1' added in between the multiply and the
> > > divide, so it cannot easily be done by a caller.
> > >
> > > Add mul_u64_add_u64_div_u64(a, b, c, d) that calculates (a * b + c)/d
> > > and implement the 'round down' and 'round up' using it.
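For reference, a minimal user-space sketch of what the patch describes, on a target with u128 support (the names mirror the patch, but this is an illustration only; the in-kernel version also handles architectures without u128 and the overflow case):

```c
#include <stdint.h>

/* Sketch of the new helper on a u128-capable target: (a * b + c) / d.
 * The real kernel version also covers architectures without u128. */
static uint64_t mul_u64_add_u64_div_u64(uint64_t a, uint64_t b,
					uint64_t c, uint64_t d)
{
	unsigned __int128 n = (unsigned __int128)a * b + c;
	return (uint64_t)(n / d);	/* assumes the quotient fits in 64 bits */
}

/* Round down: c = 0.  Round up: c = d - 1, which must be added
 * between the multiply and the divide, hence the new helper. */
static uint64_t mul_u64_u64_div_u64(uint64_t a, uint64_t b, uint64_t d)
{
	return mul_u64_add_u64_div_u64(a, b, 0, d);
}

static uint64_t mul_u64_u64_div_u64_roundup(uint64_t a, uint64_t b, uint64_t d)
{
	return mul_u64_add_u64_div_u64(a, b, d - 1, d);
}
```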
> > >
> > > Update the x86-64 asm to optimise for 'c' being a constant zero.
> > >
> > > For architectures that support u128 check for a 64bit product after
> > > the multiply (will be cheap).
> > > Leave in the early check for other architectures (mostly 32bit) when
> > > 'c' is zero to avoid the multi-part multiply.
> > >
> > > Note that the cost of the 128bit divide will dwarf the rest of the code.
> > > This function is very slow on everything except x86-64 (very very slow
> > > on 32bit).
> > >
> > > Add kerndoc definitions for all three functions.
> > >
> > > Signed-off-by: David Laight <david.laight.linux@...il.com>
> >
> > Reviewed-by: Nicolas Pitre <npitre@...libre.com>
> >
> > Sidenote: The 128-bits division cost is proportional to the number of
> > bits in the final result. So if the result is 0x0080000000000000 then
> > the loop will execute only once and exit early.
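To illustrate the sidenote: in a classic shift-and-subtract long division the iteration count tracks the bit-length of the quotient, so small quotients finish quickly. A toy 64-bit sketch (the kernel's actual 128-bit loop is structured differently, but the scaling argument is the same):

```c
#include <stdint.h>

/* Toy shift-and-subtract divide (d must be non-zero).
 * The iteration count scales with the bit-length of the quotient:
 * a quotient with few significant bits costs few iterations. */
static uint64_t div_loop(uint64_t n, uint64_t d, int *iters)
{
	uint64_t q = 0;
	int shift;

	*iters = 0;
	if (n < d)
		return 0;	/* quotient is 0: no loop iterations at all */
	/* Align d's top bit with n's top bit. */
	shift = __builtin_clzll(d) - __builtin_clzll(n);
	d <<= shift;
	for (; shift >= 0; shift--, d >>= 1) {
		(*iters)++;
		q <<= 1;
		if (n >= d) {
			n -= d;
			q |= 1;
		}
	}
	return q;
}
```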
>
> Some performance measurements for the test cases:
> 0: ok 50 25 19 19 19 19 19 19 19 19 mul_u64_u64_div_u64
> 1: ok 2 0 0 0 0 0 0 0 0 0 mul_u64_u64_div_u64
> 2: ok 4 4 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 3: ok 4 4 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 4: ok 4 4 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 5: ok 15 8 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 6: ok 275 225 223 223 223 223 223 224 224 223 mul_u64_u64_div_u64
> 7: ok 9 6 4 4 4 4 5 5 4 4 mul_u64_u64_div_u64
> 8: ok 241 192 187 187 187 187 187 188 187 187 mul_u64_u64_div_u64
> 9: ok 212 172 169 169 169 169 169 169 169 169 mul_u64_u64_div_u64
> 10: ok 12 6 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 11: ok 9 5 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 12: ok 6 4 4 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 13: ok 6 5 5 4 4 4 4 4 4 4 mul_u64_u64_div_u64
> 14: ok 4 4 5 5 4 4 4 4 4 5 mul_u64_u64_div_u64
> 15: ok 18 12 8 8 8 8 8 8 8 8 mul_u64_u64_div_u64
> 16: ok 18 11 6 6 6 6 6 6 6 6 mul_u64_u64_div_u64
> 17: ok 22 16 11 11 11 11 11 11 11 11 mul_u64_u64_div_u64
> 18: ok 25 18 9 9 9 9 9 10 9 10 mul_u64_u64_div_u64
> 19: ok 272 231 227 227 226 227 227 227 227 226 mul_u64_u64_div_u64
> 20: ok 198 188 185 185 185 186 185 185 186 186 mul_u64_u64_div_u64
> 21: ok 207 198 196 196 196 196 196 196 196 196 mul_u64_u64_div_u64
> 22: ok 201 189 190 189 190 189 190 189 190 189 mul_u64_u64_div_u64
> 23: ok 224 184 181 181 181 181 180 180 181 181 mul_u64_u64_div_u64
> 24: ok 238 185 179 179 179 179 179 179 179 179 mul_u64_u64_div_u64
> 25: ok 208 178 177 177 177 177 177 177 177 177 mul_u64_u64_div_u64
> 26: ok 170 146 139 139 139 139 139 139 139 139 mul_u64_u64_div_u64
> 27: ok 256 204 196 196 196 196 196 196 196 196 mul_u64_u64_div_u64
> 28: ok 227 195 194 195 194 195 194 195 194 195 mul_u64_u64_div_u64
>
> Entry 0 is an extra test that measures the test overhead, which is subtracted from the others.
Except I'd failed to add the 'if (!a) return' check, so it was still doing all the multiplies.
> Each column is clocks for one run of the test, but for this set I'm running
> the actual test 16 times and later dividing the clock count by 16.
> It doesn't make much difference though, so the cost of the
> mfence; rdpmc; mfence; nop_test; mfence; rdpmc; mfence
> really is about 190 clocks (between the rdpmc results).
>
> As soon as you hit a non-trivial case the number of clocks increases
> dramatically.
>
> This is on a zen5 in 64bit mode ignoring the u128 path.
> (I don't have the packages installed to compile a 32bit binary).
I had a brain wave.
If I make the mul_div depend on the result of the first rdpmc, and make the
second rdpmc depend on the result of the mul_div, all the mfence instructions
can be removed (easily done by adding '+ (value & volatile_zero)').
(I need to do the same 'trick' for my 'rep movsb' measurements.)
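The dependency trick looks roughly like this (a user-space sketch of my understanding, using rdtsc rather than rdpmc; `measure_one` is a made-up name, and with rdpmc the result term would be folded into the counter-index operand so the second read also has a true data dependency on the result):

```c
#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc(), x86 only */

/* Always zero, but the compiler must assume it could change,
 * so 'x + (y & volatile_zero)' makes x depend on y without
 * changing its value. */
static volatile uint64_t volatile_zero;

static uint64_t mul_div(uint64_t a, uint64_t b, uint64_t d)
{
	return (uint64_t)(((unsigned __int128)a * b) / d);
}

/* Sketch of the fence-free measurement: feed the first counter read
 * into the operation's inputs so the work cannot start early, and
 * fold the result into the second read so it cannot be taken early.
 * (rdtsc has no input operand, so this sketch only stops the
 * compiler reordering; rdpmc's %ecx index gives a real CPU-level
 * dependency.) */
static uint64_t measure_one(uint64_t a, uint64_t b, uint64_t d)
{
	uint64_t start = __rdtsc();
	uint64_t res = mul_div(a + (start & volatile_zero), b, d);
	uint64_t end = __rdtsc() + (res & volatile_zero);
	return end - start;
}
```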
I then get (for 10 single calls of each mul_div):
0: ok 109 62 27 26 26 26 26 26 26 26 mul_u64_u64_div_u64
1: ok 100 48 47 26 25 25 25 25 25 25 mul_u64_u64_div_u64
2: ok 306 31 31 31 31 31 31 31 31 31 mul_u64_u64_div_u64
3: ok 32 32 32 32 32 32 32 32 32 32 mul_u64_u64_div_u64
4: ok 329 31 31 31 31 31 31 31 31 31 mul_u64_u64_div_u64
5: ok 108 61 59 34 34 34 34 34 34 34 mul_u64_u64_div_u64
6: ok 892 462 290 245 243 243 243 243 243 243 mul_u64_u64_div_u64
7: ok 95 80 34 34 34 34 34 34 34 34 mul_u64_u64_div_u64
8: ok 598 500 217 218 216 214 213 218 216 218 mul_u64_u64_div_u64
9: ok 532 461 228 186 187 189 189 189 189 189 mul_u64_u64_div_u64
10: ok 57 53 31 31 31 31 31 31 31 31 mul_u64_u64_div_u64
11: ok 71 41 41 27 27 27 27 27 27 27 mul_u64_u64_div_u64
12: ok 54 27 27 27 27 27 27 27 27 27 mul_u64_u64_div_u64
13: ok 60 34 34 34 34 34 34 34 34 34 mul_u64_u64_div_u64
14: ok 34 34 34 34 34 34 34 34 34 34 mul_u64_u64_div_u64
15: ok 74 94 24 24 24 24 24 24 24 24 mul_u64_u64_div_u64
16: ok 124 46 45 20 20 20 20 20 20 20 mul_u64_u64_div_u64
17: ok 140 79 26 24 25 24 25 24 25 24 mul_u64_u64_div_u64
18: ok 144 106 24 23 23 23 23 23 23 23 mul_u64_u64_div_u64
19: ok 569 357 204 203 204 203 204 203 204 203 mul_u64_u64_div_u64
20: ok 263 279 240 208 208 208 208 208 208 208 mul_u64_u64_div_u64
21: ok 351 580 397 216 215 215 215 215 215 215 mul_u64_u64_div_u64
22: ok 293 267 267 208 209 207 208 207 208 207 mul_u64_u64_div_u64
23: ok 536 319 236 236 236 236 236 236 236 236 mul_u64_u64_div_u64
24: ok 984 334 194 194 197 193 193 193 193 193 mul_u64_u64_div_u64
25: ok 630 360 195 199 199 199 199 199 199 199 mul_u64_u64_div_u64
26: ok 558 239 150 149 148 151 149 151 149 151 mul_u64_u64_div_u64
27: ok 997 414 215 219 214 219 214 219 214 219 mul_u64_u64_div_u64
28: ok 679 398 216 216 213 215 217 216 216 213 mul_u64_u64_div_u64
These results now include the cost of the divide when the product is 64bit.
The first few results on each row are affected by things like cache misses and
branch prediction, but the later ones are pretty stable.
David