Message-ID: <20250618232648.0f58a27f@pumpkin>
Date: Wed, 18 Jun 2025 23:26:48 +0100
From: David Laight <david.laight.linux@...il.com>
To: Nicolas Pitre <nico@...xnic.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v3 next 09/10] lib: mul_u64_u64_div_u64() Optimise the
divide code
On Wed, 18 Jun 2025 16:12:49 -0400 (EDT)
Nicolas Pitre <nico@...xnic.net> wrote:
> On Wed, 18 Jun 2025, David Laight wrote:
>
> > On Wed, 18 Jun 2025 11:39:20 -0400 (EDT)
> > Nicolas Pitre <nico@...xnic.net> wrote:
> >
> > > > > + q_digit = n_long / d_msig;
> > > >
> > > > I think you want to do the divide right at the top - maybe even if the
> > > > result isn't used!
> > > > All the shifts then happen while the divide instruction is in progress
> > > > (even without out-of-order execution).
>
> Well.... testing on my old Intel Core i7-4770R doesn't show a gain.
>
> With your proposed patch as is: ~34ns per call
>
> With my proposed changes: ~31ns per call
>
> With my changes but leaving the divide at the top of the loop: ~32ns per call
I wonder what makes the difference...
Is that with random 64bit values (where you don't expect zero digits),
or values where there are likely to be small divisors and/or zero digits?
On x86 you can use the PERF_COUNT_HW_CPU_CYCLES counter to get pretty accurate
counts for a single call.
The 'trick' is to use syscall(__NR_perf_event_open, ...) and pc = mmap() to get
the register number pc->index - 1.
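The setup is roughly this - an untested sketch, error checks omitted
(attr fields and the self-monitoring page are from <linux/perf_event.h>):

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

static struct perf_event_mmap_page *pc;

static void setup_cycle_counter(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_kernel = 1;	/* user-space cycles only */

	/* pid 0, cpu -1: this thread, whichever cpu it runs on */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

	/* The first page of the mapping is the self-monitoring area;
	 * pc->index is the rdpmc counter number + 1 (0 if unsupported). */
	pc = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
}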
Then you want:
static inline unsigned int rdpmc(unsigned int counter)
{
	unsigned int low, high;

	/* Reads the counter into edx:eax; the low 32 bits are
	 * plenty for timing a single call. */
	asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
	return low;
}
and do:
	unsigned int start = rdpmc(pc->index - 1);
	unsigned int zero = 0;

	/* Hide the fact that 'zero' is zero from the compiler so the
	 * '& zero' terms survive: they make the inputs depend on the
	 * first rdpmc and the second rdpmc depend on the result. */
	OPTIMISER_HIDE_VAR(zero);
	q = mul_u64_add_u64_div_u64(a + (start & zero), b, c, d);
	elapsed = rdpmc(pc->index - 1 + (q & zero)) - start;
That carefully forces the rdpmc reads to bracket the code being tested without
the massive penalty of lfence/mfence.
Do 10 calls and the last 8 will be pretty similar.
That lets you time cold-cache and branch mis-prediction effects.
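i.e. something like (a sketch using the pc/rdpmc setup above;
a, b, c, d are whatever test inputs you pick):

	unsigned int clocks[10];
	unsigned int i;

	for (i = 0; i < 10; i++) {
		unsigned int start = rdpmc(pc->index - 1);
		unsigned int zero = 0;

		OPTIMISER_HIDE_VAR(zero);
		q = mul_u64_add_u64_div_u64(a + (start & zero), b, c, d);
		clocks[i] = rdpmc(pc->index - 1 + (q & zero)) - start;
	}
	/* clocks[0..1] show the cold-cache/mis-predict costs,
	 * clocks[2..9] should settle to much the same value. */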
> > Can you do accurate timings for arm64 or arm32?
>
> On a Broadcom BCM2712 (ARM Cortex-A76):
>
> With your proposed patch as is: ~20 ns per call
>
> With my proposed changes: ~19 ns per call
>
> With my changes but leaving the divide at the top of the loop: ~19 ns per call
Pretty much no difference.
Is that 64bit or 32bit (or the 16 bits per iteration on 64bit)?
The shifts get more expensive on 32bit.
Have you timed the original code?
>
> Both CPUs have the same max CPU clock rate (2.4 GHz). These are obtained
> with clock_gettime(CLOCK_MONOTONIC) over 56000 calls. There is some
> noise in the results over multiple runs though but still.
That many loops definitely trains the branch predictor and ignores
any effects of loading the I-cache.
As Linus keeps saying, the kernel tends to be 'cold cache', so code size
matters.
That also means that branches are 50% likely to be mis-predicted.
(Although working out what cpus actually do is hard.)
>
> I could get cycle measurements on the RPi5 but that requires a kernel
> recompile.
Or a loadable module - shame there isn't a sysctl.
>
> > I've found a 2004 Arm book that includes several I-cache busting
> > divide algorithms.
> > But I'm sure this pi-5 has hardware divide.
>
> It does.
>
>
> Nicolas