Message-ID: <20250619093259.5e82982c@pumpkin>
Date: Thu, 19 Jun 2025 09:32:59 +0100
From: David Laight <david.laight.linux@...il.com>
To: Nicolas Pitre <nico@...xnic.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>, Peter
Zijlstra <peterz@...radead.org>, Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v3 next 09/10] lib: mul_u64_u64_div_u64() Optimise the
divide code
On Wed, 18 Jun 2025 22:43:47 -0400 (EDT)
Nicolas Pitre <nico@...xnic.net> wrote:
> On Wed, 18 Jun 2025, David Laight wrote:
>
> > On Wed, 18 Jun 2025 16:12:49 -0400 (EDT)
> > Nicolas Pitre <nico@...xnic.net> wrote:
> >
> > > On Wed, 18 Jun 2025, David Laight wrote:
> > >
> > > > On Wed, 18 Jun 2025 11:39:20 -0400 (EDT)
> > > > Nicolas Pitre <nico@...xnic.net> wrote:
> > > >
> > > > > > > + q_digit = n_long / d_msig;
> > > > > >
> > > > > > I think you want to do the divide right at the top - maybe even if the
> > > > > > result isn't used!
> > > > > > All the shifts then happen while the divide instruction is in progress
> > > > > > (even without out-of-order execution).
> > >
> > > Well.... testing on my old Intel Core i7-4770R doesn't show a gain.
> > >
> > > With your proposed patch as is: ~34ns per call
> > >
> > > With my proposed changes: ~31ns per call
> > >
> > > With my changes but leaving the divide at the top of the loop: ~32ns per call
> >
> > I wonder what makes the difference...
> > Is that random 64bit values (where you don't expect zero digits)
> > or values where there are likely to be small divisors and/or zero digits?
>
> Those are values from the test module. I just copied it into a user
> space program.
Ah, those tests are heavily biased towards values with all bits set.
I added the 'pre-loop' check to speed up the few that have leading zeros
(and don't escape into the div_u64() path).
I've been timing the divisions separately.
I will look at whether it is worth just checking for the top 32bits
being zero on 32bit - where the conditional code is just register moves.
...
> My proposed changes shrink the code especially on 32-bit systems due to
> the pre-loop special cases removal.
>
> > That also means that branches are 50% likely to be mis-predicted.
>
> We can tag it as unlikely. In practice this isn't taken very often.
I suspect 'unlikely' is over-rated :-)
I had 'fun and games' a few years back trying to minimise the worst-case
path for some code running on a simple embedded cpu.
Firstly, gcc seems to ignore 'unlikely' unless there is code in the 'likely'
path - an asm comment will do nicely.
Then there is the cpu itself: the x86 prefetch logic is likely to assume
branches are not taken (probably actually 'not a branch'), but the prediction
logic itself uses whatever is in the selected slot (effectively an array -
assume hashed), so if it hasn't been 'trained' on the code being executed
the branch is 50% likely to be predicted taken.
(Only the P6 (late 90's) had prefixes for unlikely/likely.)
>
> > (Although working out what cpu actually do is hard.)
> >
> > >
> > > I could get cycle measurements on the RPi5 but that requires a kernel
> > > recompile.
> >
> > Or a loadable module - shame there isn't a sysctl.
>
> Not sure. I've not investigated how the RPi kernel is configured yet.
> I suspect this is about granting user space direct access to PMU regs.
Something like that - you don't get the TSC by default.
Access is denied to (try to) stop timing attacks - but that doesn't really
help; it just makes things too hard for everyone else.
I'm not sure whether it also stops all the 'time' functions from being
implemented in the vdso without a system call - and that hurts performance.
David
>
>
> Nicolas