Message-ID: <20250226222911.22cb0c18@pumpkin>
Date: Wed, 26 Feb 2025 22:29:11 +0000
From: David Laight <david.laight.linux@...il.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Kuan-Wei Chiu <visitorckw@...il.com>, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, jk@...abs.org, joel@....id.au, eajames@...ux.ibm.com,
andrzej.hajda@...el.com, neil.armstrong@...aro.org, rfoss@...nel.org,
maarten.lankhorst@...ux.intel.com, mripard@...nel.org, tzimmermann@...e.de,
airlied@...il.com, simona@...ll.ch, dmitry.torokhov@...il.com,
mchehab@...nel.org, awalls@...metrocast.net, hverkuil@...all.nl,
miquel.raynal@...tlin.com, richard@....at, vigneshr@...com,
louis.peens@...igine.com, andrew+netdev@...n.ch, davem@...emloft.net,
edumazet@...gle.com, pabeni@...hat.com,
parthiban.veerasooran@...rochip.com, arend.vanspriel@...adcom.com,
johannes@...solutions.net, gregkh@...uxfoundation.org,
jirislaby@...nel.org, akpm@...ux-foundation.org, hpa@...or.com,
alistair@...ple.id.au, linux@...musvillemoes.dk,
Laurent.pinchart@...asonboard.com, jonas@...boo.se,
jernej.skrabec@...il.com, kuba@...nel.org, linux-kernel@...r.kernel.org,
linux-fsi@...ts.ozlabs.org, dri-devel@...ts.freedesktop.org,
linux-input@...r.kernel.org, linux-media@...r.kernel.org,
linux-mtd@...ts.infradead.org, oss-drivers@...igine.com,
netdev@...r.kernel.org, linux-wireless@...r.kernel.org,
brcm80211@...ts.linux.dev, brcm80211-dev-list.pdl@...adcom.com,
linux-serial@...r.kernel.org, bpf@...r.kernel.org, jserv@...s.ncku.edu.tw,
Yu-Chun Lin <eleanor15x@...il.com>
Subject: Re: [PATCH 02/17] bitops: Add generic parity calculation for u64
On Mon, 24 Feb 2025 14:27:03 -0500
Yury Norov <yury.norov@...il.com> wrote:
....
> +#define parity(val) \
> +({ \
> +	u64 __v = (val); \
> +	int __ret; \
> +	switch (BITS_PER_TYPE(val)) { \
> +	case 64: \
> +		__v ^= __v >> 32; \
> +		fallthrough; \
> +	case 32: \
> +		__v ^= __v >> 16; \
> +		fallthrough; \
> +	case 16: \
> +		__v ^= __v >> 8; \
> +		fallthrough; \
> +	case 8: \
> +		__v ^= __v >> 4; \
> +		__ret = (0x6996 >> (__v & 0xf)) & 1; \
> +		break; \
> +	default: \
> +		BUILD_BUG(); \
> +	} \
> +	__ret; \
> +})
> +
You really don't want to do that!
gcc makes a right hash of it for 32-bit x86.
See https://www.godbolt.org/z/jG8dv3cvs
You'd do better switching to a 32-bit __v32 after the 64-bit xor (rough sketch below).
Even the 64-bit version is probably sub-optimal (with both gcc and clang).
The whole lot ends up being one long single-register dependency chain.
You want to do:
mov %eax, %edx
shrl $n, %eax
xor %edx, %eax
so that the 'mov' and 'shrl' can happen in the same clock
(without relying on the register-register move being optimised out).
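Roughly what I mean (an untested sketch of the shape, not a drop-in
replacement for the macro above), switching to a u32 after the first fold:

	/* only the 64-bit case needs the u64 xor */
	u32 __v32 = __v ^ (__v >> 32);

	__v32 ^= __v32 >> 16;
	__v32 ^= __v32 >> 8;
	__v32 ^= __v32 >> 4;
	__ret = (0x6996 >> (__v32 & 0xf)) & 1;

That way 32-bit x86 does one xor on a register pair and then works on a
single register, instead of dragging the u64 through every step.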
I dropped in the arm64 output as an example of where the magic 0x6996 shift
just adds an extra instruction.
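(For comparison, an untested sketch of the lookup-free tail, folding all
the way down to a single bit:

	__v32 ^= __v32 >> 2;
	__v32 ^= __v32 >> 1;
	__ret = __v32 & 1;

On arm64 each of those xor/shift pairs is a single eor with a shifted
register, so it avoids materialising the 0x6996 constant.)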
David