Message-ID: <85440s3q-n8po-o604-4877-514sq5q3o034@syhkavp.arg>
Date: Tue, 17 Jun 2025 21:39:23 -0400 (EDT)
From: Nicolas Pitre <nico@...xnic.net>
To: David Laight <david.laight.linux@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
u.kleine-koenig@...libre.com, Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Biju Das <biju.das.jz@...renesas.com>
Subject: Re: [PATCH v3 next 06/10] lib: test_mul_u64_u64_div_u64: Test both
generic and arch versions
On Sat, 14 Jun 2025, Nicolas Pitre wrote:
> On Sat, 14 Jun 2025, David Laight wrote:
>
> > Change the #if in div64.c so that test_mul_u64_u64_div_u64.c
> > can compile and test the generic version (including the 'long multiply')
> > on architectures (eg amd64) that define their own copy.
> > Test the kernel version and the locally compiled version on all arch.
> > Output the time taken (in ns) on the 'test completed' trace.
> >
> > For reference, on my zen 5, the optimised version takes ~220ns and the
> > generic version ~3350ns.
> > Using the native multiply saves ~200ns and adding back the ilog2() 'optimisation'
> > test adds ~50ms.
> >
> > Signed-off-by: David Laight <david.laight.linux@...il.com>
>
> Reviewed-by: Nicolas Pitre <npitre@...libre.com>
In fact this doesn't compile on ARM32. The following is needed to fix that:
commit 271a7224634699721b6383ba28f37b23f901319e
Author: Nicolas Pitre <nico@...xnic.net>
Date: Tue Jun 17 17:14:05 2025 -0400
fixup! lib: test_mul_u64_u64_div_u64: Test both generic and arch versions
diff --git a/lib/math/test_mul_u64_u64_div_u64.c b/lib/math/test_mul_u64_u64_div_u64.c
index 88316e68512c..44df9aa39406 100644
--- a/lib/math/test_mul_u64_u64_div_u64.c
+++ b/lib/math/test_mul_u64_u64_div_u64.c
@@ -153,7 +153,10 @@ static void __exit test_exit(void)
 }
 
 /* Compile the generic mul_u64_add_u64_div_u64() code */
+#define __div64_32 __div64_32
+#define div_s64_rem div_s64_rem
 #define div64_u64 div64_u64
+#define div64_u64_rem div64_u64_rem
 #define div64_s64 div64_s64
 #define iter_div_u64_rem iter_div_u64_rem