Message-ID: <20250625170234.29605eed@pumpkin>
Date: Wed, 25 Jun 2025 17:02:34 +0100
From: David Laight <david.laight.linux@...il.com>
To: Yury Norov <yury.norov@...il.com>
Cc: cp0613@...ux.alibaba.com, linux@...musvillemoes.dk, arnd@...db.de,
paul.walmsley@...ive.com, palmer@...belt.com, aou@...s.berkeley.edu,
alex@...ti.fr, linux-riscv@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] bitops: rotate: Add riscv implementation using Zbb
extension
On Fri, 20 Jun 2025 12:20:47 -0400
Yury Norov <yury.norov@...il.com> wrote:
> On Fri, Jun 20, 2025 at 07:16:10PM +0800, cp0613@...ux.alibaba.com wrote:
> > From: Chen Pei <cp0613@...ux.alibaba.com>
> >
> > The RISC-V Zbb extension[1] defines bitwise rotation instructions,
> > which can be used to implement rotate related functions.
> >
> > [1] https://github.com/riscv/riscv-bitmanip/
> >
> > Signed-off-by: Chen Pei <cp0613@...ux.alibaba.com>
> > ---
> > arch/riscv/include/asm/bitops.h | 172 ++++++++++++++++++++++++++++++++
> > 1 file changed, 172 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
> > index d59310f74c2b..be247ef9e686 100644
> > --- a/arch/riscv/include/asm/bitops.h
> > +++ b/arch/riscv/include/asm/bitops.h
> > @@ -20,17 +20,20 @@
> > #include <asm-generic/bitops/__fls.h>
> > #include <asm-generic/bitops/ffs.h>
> > #include <asm-generic/bitops/fls.h>
> > +#include <asm-generic/bitops/rotate.h>
> >
> > #else
> > #define __HAVE_ARCH___FFS
> > #define __HAVE_ARCH___FLS
> > #define __HAVE_ARCH_FFS
> > #define __HAVE_ARCH_FLS
> > +#define __HAVE_ARCH_ROTATE
> >
> > #include <asm-generic/bitops/__ffs.h>
> > #include <asm-generic/bitops/__fls.h>
> > #include <asm-generic/bitops/ffs.h>
> > #include <asm-generic/bitops/fls.h>
> > +#include <asm-generic/bitops/rotate.h>
> >
> > #include <asm/alternative-macros.h>
> > #include <asm/hwcap.h>
> > @@ -175,6 +178,175 @@ static __always_inline int variable_fls(unsigned int x)
> > variable_fls(x_); \
> > })
>
> ...
>
> > +static inline u8 variable_ror8(u8 word, unsigned int shift)
> > +{
> > + u32 word32 = ((u32)word << 24) | ((u32)word << 16) | ((u32)word << 8) | word;
>
> Can you add a comment about what is happening here? Are you sure it's
> optimized out in case of the 'legacy' alternative?
Is it even a gain in the Zbb case?
The "rorw" instruction is only ever going to help full-word rotates.
Here you might as well do ((word << 8 | word) >> shift).
For "rol8" you'd need ((word << 24 | word) 'rol' shift).
I still bet the generic code is faster (but see below).
Same for 16bit rotates.
Actually the generic version is (probably) horrid for everything except x86.
See https://www.godbolt.org/z/xTxYj57To
unsigned char rol(unsigned char v, unsigned int shift)
{
	/* Double the byte into bits 8..15; after the left shift,
	 * bits 8..15 hold the rotated byte. */
	return (v << 8 | v) << shift >> 8;
}

unsigned char ror(unsigned char v, unsigned int shift)
{
	/* Double the byte into bits 8..15 so bits shifted out of the
	 * bottom reappear from bit 8; truncation keeps bits 0..7. */
	return (v << 8 | v) >> shift;
}
David