Message-ID: <20250123074618.GB183612@sol.localdomain>
Date: Wed, 22 Jan 2025 23:46:18 -0800
From: Eric Biggers <ebiggers@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
Ard Biesheuvel <ardb@...nel.org>, Chao Yu <chao@...nel.org>,
"Darrick J. Wong" <djwong@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Kent Overstreet <kent.overstreet@...ux.dev>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Michael Ellerman <mpe@...erman.id.au>,
Theodore Ts'o <tytso@....edu>,
Vinicius Peixoto <vpeixoto@...amp.dev>,
WangYuli <wangyuli@...ontech.com>
Subject: Re: [GIT PULL] CRC updates for 6.14
On Wed, Jan 22, 2025 at 09:16:33PM -0800, Eric Biggers wrote:
> As you probably noticed, the other problem is that CRC32 has 4 generic
> implementations: bit-by-bit, and slice by 1, 4, or 8 bytes.
>
> Bit-by-bit is useless. Slice by 4 and slice by 8 are too similar to have both.
>
> It's not straightforward to choose between slice by 1 and slice by 4/8, though.
> When benchmarking slice-by-n, a higher n will always be faster in
> microbenchmarks (up to about n=16), but the required table size also increases
> accordingly. E.g., a slice-by-1 CRC32 uses a 1024-byte table, while slice-by-8
> uses an 8192-byte table. This table is accessed randomly, which is hard on the
> dcache and can hurt performance in real-world workloads where the system is
> bottlenecked on memory.
>
> I'm tentatively planning to just say that slice-by-4 is a good enough compromise
> and have that be the only generic CRC32 implementation.
>
> But I need to try an interleaved implementation too, since it's possible that
> could give the best of both worlds.
Actually, I'm tempted to just provide slice-by-1 (a.k.a. byte-by-byte) as the
only generic CRC32 implementation. The generic code has become increasingly
irrelevant now that arch-optimized code exists; on long messages, the arch
code tends to be 10 to 100 times faster.
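For concreteness, slice-by-1 is just one lookup in a 256-entry table per input
byte. A minimal userspace sketch (not the kernel's actual lib/crc32.c; it uses
the conventional ~0 init / final-invert convention, and the names are mine):

```c
#include <stddef.h>
#include <stdint.h>

/* 256 entries * 4 bytes = the 1024-byte table discussed above */
static uint32_t crc32_table[256];

/* Build the table from the reflected CRC32 polynomial 0xEDB88320 */
static void crc32_init(void)
{
	for (uint32_t i = 0; i < 256; i++) {
		uint32_t crc = i;
		for (int j = 0; j < 8; j++)
			crc = (crc >> 1) ^ (crc & 1 ? 0xEDB88320u : 0);
		crc32_table[i] = crc;
	}
}

/* Slice-by-1: one table lookup per input byte */
static uint32_t crc32_sb1(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--)
		crc = crc32_table[(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}
```

Callers would compute ~crc32_sb1(~0u, data, len); for the standard test vector
"123456789" that yields the well-known check value 0xCBF43926.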
The generic CRC32 code is still needed when the CPU features required by the
arch code are unavailable, but that's rare these days. It's also still needed
when the CPU has no scalar instructions to accelerate the CRC (e.g., on x86_64,
the "regular" CRC32, as opposed to the Castagnoli CRC32 which has dedicated
instructions) *and* the message is too short for the overhead of saving and
restoring the vector registers to be worth it -- typically < 64 bytes or so.
And it's still needed when the CRC is computed in a context where vector
registers can't be used at all.
But those don't feel like very strong motivations for the huge tables anymore.
I think the huge tables were really intended for optimizing CRCs of long
messages back when CPUs didn't have any better way to do it.
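For comparison, slice-by-n trades table size for fewer dependent lookups per
word. A hypothetical slice-by-4 sketch in the same style (4 * 256 * 4 = 4096
bytes of tables; slice-by-8 doubles that to the 8192 bytes mentioned above),
assuming a little-endian host:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* 4 tables of 256 entries each: 4 * 256 * 4 = 4096 bytes */
static uint32_t t[4][256];

static void crc32_sb4_init(void)
{
	for (uint32_t i = 0; i < 256; i++) {
		uint32_t crc = i;
		for (int j = 0; j < 8; j++)
			crc = (crc >> 1) ^ (crc & 1 ? 0xEDB88320u : 0);
		t[0][i] = crc;
	}
	/* t[k][i] extends t[k-1][i] by one zero byte */
	for (int k = 1; k < 4; k++)
		for (uint32_t i = 0; i < 256; i++)
			t[k][i] = (t[k - 1][i] >> 8) ^ t[0][t[k - 1][i] & 0xff];
}

static uint32_t crc32_sb4(uint32_t crc, const uint8_t *p, size_t len)
{
	/* Main loop: 4 bytes per iteration, 4 independent table lookups */
	while (len >= 4) {
		uint32_t w;
		memcpy(&w, p, 4);	/* little-endian load assumed */
		crc ^= w;
		crc = t[3][crc & 0xff] ^ t[2][(crc >> 8) & 0xff] ^
		      t[1][(crc >> 16) & 0xff] ^ t[0][crc >> 24];
		p += 4;
		len -= 4;
	}
	/* Tail: byte-at-a-time using t[0] */
	while (len--)
		crc = t[0][(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}
```

The four lookups per word have no data dependency on each other, which is
where the microbenchmark speedup comes from -- at the cost of the larger,
randomly-accessed tables.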
- Eric