Message-Id: <20250518133848.5811-1-david.laight.linux@gmail.com>
Date: Sun, 18 May 2025 14:38:44 +0100
From: David Laight <david.laight.linux@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Cc: David Laight <david.laight.linux@...il.com>,
	u.kleine-koenig@...libre.com,
	Nicolas Pitre <npitre@...libre.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Biju Das <biju.das.jz@...renesas.com>
Subject: [PATCH v2 next 0/4] Implement mul_u64_u64_div_u64_roundup()

The pwm-stm32.c code wants a 'rounding up' version of mul_u64_u64_div_u64().
This can be done simply by adding 'divisor - 1' to the 128-bit product before
the divide.
Implement mul_u64_add_u64_div_u64(a, b, c, d) = (a * b + c)/d based on the
existing code.
Define mul_u64_u64_div_u64(a, b, d) as mul_u64_add_u64_div_u64(a, b, 0, d) and
mul_u64_u64_div_u64_roundup(a, b, d) as mul_u64_add_u64_div_u64(a, b, d-1, d).
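
For reference, a minimal sketch of those two definitions (mine, not lifted
from the patch; the real helpers live in include/linux/math64.h):

	/* Wrapper definitions as described above. */
	static inline u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
	{
		return mul_u64_add_u64_div_u64(a, b, 0, d);	/* (a * b) / d */
	}

	static inline u64 mul_u64_u64_div_u64_roundup(u64 a, u64 b, u64 d)
	{
		/* Adding d - 1 before the divide rounds any remainder up. */
		return mul_u64_add_u64_div_u64(a, b, d - 1, d);
	}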

Only x86-64 has an optimised (asm) version of the function.
It is tuned to avoid the 'add c' when c is known to be zero.
In all other cases the extra code will be noise compared to the software
divide code.
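
For illustration only (a sketch, not the actual asm in the patch): on x86-64
the 128-bit product sits in rdx:rax, so folding in 'c' is just an
add-with-carry before the divide; per the above, the real version also skips
the add entirely when the compiler can see that c is zero:

	static inline u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
	{
		u64 q;

		/*
		 * rdx:rax = a * b; rdx:rax += c; rax = rdx:rax / d.
		 * As with the existing asm, divq raises #DE if the
		 * quotient doesn't fit in 64 bits (or if d is zero).
		 */
		asm ("mulq %[b]; addq %[c], %%rax; adcq $0, %%rdx; divq %[d]"
		     : "=a" (q)
		     : "a" (a), [b] "rm" (b), [c] "rm" (c), [d] "rm" (d)
		     : "rdx");

		return q;
	}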

I've updated the test module to test mul_u64_u64_div_u64_roundup() and
also enhanced it to verify the C division code on x86-64.
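
A hypothetical shape for such a check (a sketch only, assuming 'a', 'b' and
'd' are the loop's test values, chosen so the quotient fits in 64 bits, and
that a 128-bit type is available for the reference result):

	unsigned __int128 prod = (unsigned __int128)a * b;
	u64 expected = (u64)((prod + d - 1) / d);	/* reference roundup */

	if (mul_u64_u64_div_u64_roundup(a, b, d) != expected)
		pr_err("%#llx * %#llx / %#llx: bad rounded-up quotient\n",
		       a, b, d);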

Changes for v2:
- Rename the 'divisor' parameter from 'c' to 'd'.
- Add an extra patch to use BUG_ON() to trap zero divisors.
- Remove the last patch that ran the C code on x86-64
  (I've a plan to do that differently).

Note that this code is slow: in userspace on a Zen 5 it takes 220-250 clocks
in 64-bit mode and 450-900 clocks in 32-bit mode (ignoring the fast-path
cases).
It isn't helped by gcc making a 'pig's breakfast' of the mixed 32/64-bit
maths (clang is a lot better), but it is helped by the x86 sh[rl]d and cmov
instructions (enabled for my 32-bit builds).

And I'm not at all sure the call in kernel/sched/cputime.c is confined to
hardware initialisation; it may well be on a relatively common path.

I've a follow-up patch that reduces the clock counts to about 80 in
64-bit mode and 130 in 32-bit mode (pretty much data independent).

David Laight (4):
  lib: mul_u64_u64_div_u64() rename parameter 'c' to 'd'
  lib: mul_u64_u64_div_u64() Use BUG_ON() for divide by zero
  lib: Add mul_u64_add_u64_div_u64() and mul_u64_u64_div_u64_roundup()
  lib: Add tests for mul_u64_u64_div_u64_roundup()

 arch/x86/include/asm/div64.h        |  19 +++--
 include/linux/math64.h              |  45 ++++++++++-
 lib/math/div64.c                    |  43 ++++++-----
 lib/math/test_mul_u64_u64_div_u64.c | 116 +++++++++++++++++-----------
 4 files changed, 149 insertions(+), 74 deletions(-)

-- 
2.39.5

