Message-ID: <osjlhhph7hadq4ovynqeegr3rxliamluo7lvq7gje45hnem4dq@53pwc56v4fah>
Date: Thu, 23 Jan 2025 17:36:40 -0500
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Eric Biggers <ebiggers@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org, Ard Biesheuvel <ardb@...nel.org>,
Chao Yu <chao@...nel.org>, "Darrick J. Wong" <djwong@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>, "Martin K. Petersen" <martin.petersen@...cle.com>,
Michael Ellerman <mpe@...erman.id.au>, Theodore Ts'o <tytso@....edu>,
Vinicius Peixoto <vpeixoto@...amp.dev>, WangYuli <wangyuli@...ontech.com>
Subject: Re: [GIT PULL] CRC updates for 6.14
On Wed, Jan 22, 2025 at 11:46:18PM -0800, Eric Biggers wrote:
> On Wed, Jan 22, 2025 at 09:16:33PM -0800, Eric Biggers wrote:
> > As you probably noticed, the other problem is that CRC32 has 4 generic
> > implementations: bit-by-bit, and slice by 1, 4, or 8 bytes.
> >
> > Bit-by-bit is useless. Slice by 4 and slice by 8 are too similar to have both.
> >
> > It's not straightforward to choose between slice by 1 and slice by 4/8, though.
> > When benchmarking slice-by-n, a higher n will always be faster in
> > microbenchmarks (up to about n=16), but the required table size also increases
> > accordingly. E.g., a slice-by-1 CRC32 uses a 1024-byte table, while slice-by-8
> > uses an 8192-byte table. This table is accessed randomly, which is really bad on
> > the dcache, and can be really bad for performance in real world scenarios where
> > the system is bottlenecked on memory.
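
For anyone following along, a minimal sketch of what slice-by-4 looks
like - this is an illustration of the technique only, not the actual
lib/crc32.c code; it assumes a little-endian CPU, the usual reflected
polynomial 0xEDB88320, and leaves the ^0xffffffff pre/post
conditioning to the caller:

#include <stdint.h>
#include <stddef.h>

static uint32_t crc32_table[4][256];	/* 4 * 256 * 4 = 4096 bytes */

static void crc32_init(void)
{
	/* table[0] is the classic byte-at-a-time table... */
	for (int i = 0; i < 256; i++) {
		uint32_t crc = i;
		for (int j = 0; j < 8; j++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320 : 0);
		crc32_table[0][i] = crc;
	}
	/* ...and table[t][i] advances table[t-1][i] by one zero byte */
	for (int t = 1; t < 4; t++)
		for (int i = 0; i < 256; i++)
			crc32_table[t][i] = (crc32_table[t - 1][i] >> 8) ^
				crc32_table[0][crc32_table[t - 1][i] & 0xff];
}

static uint32_t crc32_slice4(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len >= 4) {
		/* fold in 4 message bytes (little-endian load) */
		crc ^= (uint32_t)p[0] | (uint32_t)p[1] << 8 |
		       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
		/* four independent lookups, one per 1K table */
		crc = crc32_table[3][crc & 0xff] ^
		      crc32_table[2][(crc >> 8) & 0xff] ^
		      crc32_table[1][(crc >> 16) & 0xff] ^
		      crc32_table[0][crc >> 24];
		p += 4;
		len -= 4;
	}
	while (len--)	/* tail: plain slice-by-1 */
		crc = (crc >> 8) ^ crc32_table[0][(crc ^ *p++) & 0xff];
	return crc;
}

Slice-by-1 is just the tail loop with its single 1024-byte table;
slice-by-8 is the same idea with eight tables and an 8-byte stride.
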
> >
> > I'm tentatively planning to just say that slice-by-4 is a good enough compromise
> > and have that be the only generic CRC32 implementation.
> >
> > But I need to try an interleaved implementation too, since it's possible that
> > could give the best of both worlds.
>
> Actually, I'm tempted to just provide slice-by-1 (a.k.a. byte-by-byte) as the
> only generic CRC32 implementation. The generic code has become increasingly
> irrelevant now that arch-optimized code exists; the arch-optimized code
> tends to be 10 to 100 times faster on long messages.
>
> The generic CRC32 code is still needed when the CPU features needed by the arch
> code are unavailable. But that's rare these days. It's also still needed when
> the CPU has no scalar instructions to accelerate the CRC (e.g. on x86_64, the
> "regular" CRC32 as opposed to the Castagnoli CRC32) *and* the message is too
> short for the overhead of saving and restoring the vector registers to be worth
> it -- typically < 64 bytes or so. And it's still needed when the CRC is done in
> a context where vector registers can't be used at all.
>
> But those don't feel like very strong motivations for the huge tables anymore.
> I think the huge tables were really intended for optimizing CRCs of long
> messages back when CPUs didn't have any better way to do it.
Have you by chance been looking at the performance of 64-bit CRC
algorithms, or do you have any recommendations there?
I've been starting to consider switching to a 64-bit CRC as the
default for bcachefs - and we do want it to be a regular CRC so we can
combine/add checksums together, not one of the newer, fancier add/xor
based hashes.
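
(For anyone unfamiliar with the combine property: because a CRC is
linear over GF(2), crc(A||B) can be computed from crc(A), crc(B), and
len(B) alone, without rescanning the data. A minimal sketch of the
idea, reusing crc32_table[0] from the slice-by-4 sketch above; zlib's
crc32_combine() computes the same thing in O(log len) via polynomial
exponentiation:

/*
 * crc(A||B) from crc1 = crc(A), crc2 = crc(B), len2 = len(B):
 * advance crc1 through len2 zero bytes, then xor in crc2.  The
 * usual 0xffffffff init/xorout terms cancel, so this works on
 * the final CRC values directly.  O(len2) for clarity only.
 */
uint32_t crc32_combine_sketch(uint32_t crc1, uint32_t crc2, size_t len2)
{
	while (len2--)
		crc1 = (crc1 >> 8) ^ crc32_table[0][crc1 & 0xff];
	return crc1 ^ crc2;
}

The fancier hashes have no equivalently cheap combine, which is what
rules them out here.)
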
I thought we'd gotten a faster PCLMULQDQ-based implementation of a
CRC64, but looking again it appears not, hrm.
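
For reference, the generic fallback to compare against is the usual
table-driven byte-at-a-time loop - roughly the following sketch,
assuming the reflected ECMA-182 polynomial that e.g. xz uses; this is
an illustration, not any particular in-tree implementation:

#include <stdint.h>
#include <stddef.h>

/* Reflected ECMA-182 polynomial, as used by e.g. CRC-64/XZ. */
#define CRC64_POLY 0xC96C5795D7870F42ULL

static uint64_t crc64_table[256];	/* 256 * 8 = 2048 bytes */

static void crc64_init(void)
{
	for (int i = 0; i < 256; i++) {
		uint64_t crc = i;
		for (int j = 0; j < 8; j++)
			crc = (crc >> 1) ^ ((crc & 1) ? CRC64_POLY : 0);
		crc64_table[i] = crc;
	}
}

/* Pass crc = 0 to start; chained calls accumulate as expected. */
static uint64_t crc64(uint64_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	crc = ~crc;			/* init = all-ones */
	while (len--)
		crc = (crc >> 8) ^ crc64_table[(crc ^ *p++) & 0xff];
	return ~crc;			/* xorout = all-ones */
}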