Message-ID: <a081958f-0ae8-4b8b-b49f-81378f3c05a7@iencinas.com>
Date: Sat, 8 Mar 2025 13:58:44 +0100
From: Ignacio Encinas Rubio <ignacio@...cinas.com>
To: Eric Biggers <ebiggers@...nel.org>, Björn Töpel
<bjorn@...nel.org>, Palmer Dabbelt <palmer@...belt.com>
Cc: linux-kernel-mentees@...ts.linux.dev, linux-kernel@...r.kernel.org,
linux-crypto@...r.kernel.org, linux-riscv@...ts.infradead.org,
Zhihang Shao <zhihang.shao.iscas@...il.com>, Ard Biesheuvel
<ardb@...nel.org>, Xiao Wang <xiao.w.wang@...el.com>,
Charlie Jenkins <charlie@...osinc.com>,
Alexandre Ghiti <alexghiti@...osinc.com>, skhan@...uxfoundation.org
Subject: Re: [PATCH 0/4] RISC-V CRC optimizations
Hello!
On 2/3/25 23:04, Eric Biggers wrote:
> So, quite positive results. Though, the fact the msb-first CRCs are (still) so
> much slower than lsb-first ones indicates that be64_to_cpu() is super slow on
> RISC-V. That seems to be caused by the rev8 instruction from Zbb not being
> used. I wonder if there are any plans to make the endianness swap macros use
> rev8, or if I'm going to have to roll my own endianness swap in the CRC code.
> (I assume it would be fine for the CRC code to depend on both Zbb and Zbc.)
I saw this message the other day and started working on a patch, but I
would like to double-check that I'm on the right track:
- be64_to_cpu ends up being __swab64 (include/uapi/linux/swab.h)
If Zbb were part of the base ISA, enabling CONFIG_ARCH_USE_BUILTIN_BSWAP
would take care of the problem, but that is not the case.
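
For context, my (simplified, paraphrased from memory) understanding of
the dispatch in include/uapi/linux/swab.h is roughly this:

  #ifdef __HAVE_BUILTIN_BSWAP64__   /* set when CONFIG_ARCH_USE_BUILTIN_BSWAP=y */
  #define __swab64(x) (__u64)__builtin_bswap64((__u64)(x))
  #else
  #define __swab64(x)                              \
          (__u64)(__builtin_constant_p(x) ?        \
                  ___constant_swab64(x) :          \
                  __fswab64(x)) /* uses __arch_swab64() if the arch defines it,
                                   otherwise generic shift-and-mask code */
  #endif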
Therefore, we would have to define __arch_swab<X> ourselves, like some
other architectures do in arch/<ARCH>/include/uapi/asm/swab.h.
For those functions to be correct in generic kernels (where Zbb is not
guaranteed at build time), we would need to use ALTERNATIVE() macros
like it is done in arch/riscv/include/asm/bitops.h.
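
To make it concrete, something along these lines is what I had in mind
for the 64-bit case (completely untested sketch, modeled on the
variable__ffs() pattern in bitops.h; where exactly the function should
live, uapi vs non-uapi header, still needs to be figured out):

  /*
   * Untested sketch: runtime-patch between a Zbb rev8 and the generic
   * fallback, following the asm-goto ALTERNATIVE pattern from
   * arch/riscv/include/asm/bitops.h.
   */
  static __always_inline __u64 __arch_swab64(__u64 value)
  {
          __u64 ret;

          /* Patched to a nop when RISCV_ISA_EXT_ZBB is detected */
          asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
                               RISCV_ISA_EXT_ZBB, 1)
                   : : : : legacy);

          asm volatile (".option push\n"
                        ".option arch,+zbb\n"
                        "rev8 %0, %1\n"
                        ".option pop\n"
                        : "=r" (ret) : "r" (value));
          return ret;

  legacy:
          return ___constant_swab64(value); /* generic shift-and-mask fallback */
  }
  #define __arch_swab64 __arch_swab64

Note that rev8 reverses the whole 64-bit register, so the 32- and
16-bit variants would need an extra shift after it (or their own
fallbacks).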
Would this be ok? I'm not sure whether the overhead of the
ALTERNATIVEs could be a problem here.
Thanks in advance :)