Message-ID: <20250629113840.2f319956@pumpkin>
Date: Sun, 29 Jun 2025 11:38:40 +0100
From: David Laight <david.laight.linux@...il.com>
To: cp0613@...ux.alibaba.com
Cc: alex@...ti.fr, aou@...s.berkeley.edu, arnd@...db.de,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org, linux@...musvillemoes.dk,
palmer@...belt.com, paul.walmsley@...ive.com, yury.norov@...il.com
Subject: Re: [PATCH 2/2] bitops: rotate: Add riscv implementation using Zbb
extension
On Sat, 28 Jun 2025 20:08:16 +0800
cp0613@...ux.alibaba.com wrote:
> On Wed, 25 Jun 2025 17:02:34 +0100, david.laight.linux@...il.com wrote:
>
> > Is it even a gain in the zbb case?
> > The "rorw" is only ever going to help full-word rotates.
> > For "ror8" you might as well do ((word << 8 | word) >> shift).
> >
> > For "rol8" you'd need ((word << 24 | word) 'rol' shift).
> > I still bet the generic code is faster (but see below).
> >
> > Same for 16-bit rotates.
> >
> > Actually the generic version is (probably) horrid for everything except x86.
> > See https://www.godbolt.org/z/xTxYj57To
>
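To spell that out, the 8-bit helpers could look something like this
(a completely untested sketch - the names are made up, and rol32() is
the generic full-word rotate from linux/bitops.h):

```
/* Assumes the usual kernel types and rol32() from linux/bitops.h. */

static inline u8 ror8_widened(u8 word, unsigned int shift)
{
	/* Duplicate the byte so the bits shifted out at the bottom
	 * are refilled from the copy in bits 8..15. */
	u32 w = ((u32)word << 8) | word;

	return w >> (shift & 7);
}

static inline u8 rol8_widened(u8 word, unsigned int shift)
{
	/* Duplicate the byte at the top so a full-word rotate-left
	 * wraps the high bits back round into bits 0..7. */
	u32 w = ((u32)word << 24) | word;

	return rol32(w, shift & 7);
}
```

The 16-bit versions have the same shape with 16 in place of both 8 and 24.
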
> Thanks for the suggestion - that site is very instructive. Judging by the
> results, the generic version is indeed friendliest to x86, which is all the
> more reason for other architectures to provide optimized versions. Taking the
> riscv64 ror32 implementation as an example, compare the assembly generated
> for the following two functions:
> ```
> u32 zbb_opt_ror32(u32 word, unsigned int shift)
> {
> 	asm volatile(
> 		".option push\n"
> 		".option arch,+zbb\n"
> 		"rorw %0, %1, %2\n"
> 		".option pop\n"
> 		: "=r" (word) : "r" (word), "r" (shift) :);
>
> 	return word;
> }
>
> u16 generic_ror32(u16 word, unsigned int shift)
> {
> 	return (word >> (shift & 31)) | (word << ((-shift) & 31));
> }
> ```
> Their disassembly is:
> ```
> zbb_opt_ror32:
> <+0>: addi sp,sp,-16
> <+2>: sd s0,0(sp)
> <+4>: sd ra,8(sp)
> <+6>: addi s0,sp,16
> <+8>: .insn 4, 0x60b5553b
> <+12>: ld ra,8(sp)
> <+14>: ld s0,0(sp)
> <+16>: sext.w a0,a0
> <+18>: addi sp,sp,16
> <+20>: ret
>
> generic_ror32:
> <+0>: addi sp,sp,-16
> <+2>: andi a1,a1,31
> <+4>: sd s0,0(sp)
> <+6>: sd ra,8(sp)
> <+8>: addi s0,sp,16
> <+10>: negw a5,a1
> <+14>: sllw a5,a0,a5
> <+18>: ld ra,8(sp)
> <+20>: ld s0,0(sp)
> <+22>: srlw a0,a0,a1
> <+26>: or a0,a0,a5
> <+28>: slli a0,a0,0x30
> <+30>: srli a0,a0,0x30
> <+32>: addi sp,sp,16
> <+34>: ret
> ```
> As can be seen, the zbb-optimized implementation uses fewer instructions,
> even for 16-bit and 8-bit data.
Far too many register spills to the stack.
I think you've forgotten to specify -O2.
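E.g. rebuild with something like (assuming a riscv64 cross toolchain;
the exact -march string may differ):

```
riscv64-linux-gnu-gcc -O2 -march=rv64gc_zbb -c ror.c
riscv64-linux-gnu-objdump -d ror.o
```

With -O2 neither function should need to touch sp, s0 or ra at all -
just a couple of ALU instructions and a ret.
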
David