Message-ID: <e6541fcf-1aed-b4eb-5688-0c7a2cf0b315@baylibre.com>
Date: Tue, 4 Nov 2025 12:16:00 -0500 (EST)
From: Nicolas Pitre <npitre@...libre.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: David Laight <david.laight.linux@...il.com>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Biju Das <biju.das.jz@...renesas.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>, Li RongQing <lirongqing@...du.com>,
Yu Kuai <yukuai3@...wei.com>, Khazhismel Kumykov <khazhy@...omium.org>,
Jens Axboe <axboe@...nel.dk>, x86@...nel.org
Subject: Re: [PATCH v4 next 0/9] Implement mul_u64_u64_div_u64_roundup()
On Thu, 30 Oct 2025, Andrew Morton wrote:
> Thanks, I added this to mm.git's mm-nonmm-unstable branch for some
> linux-next exposure. I have a note that [3/9] may be updated in
> response to Nicolas's comment.
This is the change I'd like to see:
----- >8
From: Nicolas Pitre <npitre@...libre.com>
Subject: lib: mul_u64_u64_div_u64(): optimize quick path for small numbers
If the 128-bit product is small enough (n_hi == 0), we should branch to
div64_u64() right away. This saves one test on that quick path, which is
far more prevalent than the divide-by-0 case, and div64_u64() can cope
with a zero divisor (theoretically undefined behavior) just fine too.
The cost for regular cases remains the same.
Signed-off-by: Nicolas Pitre <npitre@...libre.com>
---
diff --git a/lib/math/div64.c b/lib/math/div64.c
index 4e4e962261c3..d1e92ea24fce 100644
--- a/lib/math/div64.c
+++ b/lib/math/div64.c
@@ -247,6 +247,9 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
 
 	n_hi = mul_u64_u64_add_u64(&n_lo, a, b, c);
 
+	if (!n_hi)
+		return div64_u64(n_lo, d);
+
 	if (unlikely(n_hi >= d)) {
 		/* trigger runtime exception if divisor is zero */
 		if (d == 0) {
@@ -259,9 +262,6 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
 		return ~0ULL;
 	}
 
-	if (!n_hi)
-		return div64_u64(n_lo, d);
-
 	/* Left align the divisor, shifting the dividend to match */
 	d_z_hi = __builtin_clzll(d);
 	if (d_z_hi) {
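
As a side note for anyone reading along: here's a minimal userspace
sketch (not kernel code; the *_demo helpers are hypothetical stand-ins
for div64_u64() and friends) of what the function boils down to on the
quick path, i.e. when the 128-bit product fits in 64 bits:

	#include <stdint.h>

	/* Stand-in for the kernel's div64_u64(): plain division on 64-bit. */
	static uint64_t div64_u64_demo(uint64_t n, uint64_t d)
	{
		return n / d;	/* d == 0 traps at runtime, like the kernel path */
	}

	/* Sketch of (a * b + c) / d using a 128-bit intermediate product. */
	static uint64_t mul_u64_add_u64_div_u64_demo(uint64_t a, uint64_t b,
						     uint64_t c, uint64_t d)
	{
		unsigned __int128 n = (unsigned __int128)a * b + c;
		uint64_t n_hi = (uint64_t)(n >> 64);
		uint64_t n_lo = (uint64_t)n;

		/* the quick path this patch hoists: one test, one 64-bit division */
		if (!n_hi)
			return div64_u64_demo(n_lo, d);

		/* quotient wouldn't fit in 64 bits: saturate, as the kernel does */
		if (n_hi >= d)
			return ~(uint64_t)0;

		/* stand-in for the kernel's shift-and-subtract 128-by-64 division */
		return (uint64_t)(n / d);
	}

The point of the patch is simply that this one-test fast path is taken
before bothering with the divisor checks, since div64_u64() will trap
on d == 0 by itself.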