Message-ID: <20250305191608.GA19889@sol.localdomain>
Date: Wed, 5 Mar 2025 11:16:08 -0800
From: Eric Biggers <ebiggers@...nel.org>
To: David Laight <david.laight.linux@...il.com>
Cc: linux-kernel@...r.kernel.org, Bill Wendling <morbo@...gle.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
	"H . Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
	Nathan Chancellor <nathan@...nel.org>,
	Nick Desaulniers <nick.desaulniers+lkml@...il.com>,
	Justin Stitt <justinstitt@...gle.com>, linux-crypto@...r.kernel.org,
	llvm@...ts.linux.dev
Subject: Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs

On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> On Tue,  4 Mar 2025 13:32:16 -0800
> Eric Biggers <ebiggers@...nel.org> wrote:
> 
> > From: Eric Biggers <ebiggers@...gle.com>
> > 
> > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > do a 4-2-1 step-down instead of a byte-at-a-time loop.  This allows
> > taking advantage of wider CRC instructions.  Note that crc32c-3way.S
> > already uses this same optimization.
> 
> An alternative is to add extra zero bytes at the start of the buffer.
> They don't affect the crc and just need the first 8 bytes shifted left.
> 
> I think any non-zero 'crc-in' just needs to be xor'ed over the first
> 4 actual data bytes.
> (It's over 40 years since I did the maths of CRC.)
> 
> You won't notice the misaligned accesses all down the buffer.
> When I was testing different ipcsum code, misaligned buffers
> cost less than 1 clock per cache line.
> I think that was even true for the versions that managed 12 bytes
> per clock (including the one Linus committed).
> 
> 	David

Sure, but that only works when len >= sizeof(unsigned long).  Also, the initial
CRC sometimes has to be divided between two unsigned longs: e.g. with 3 odd
leading bytes on x86_64, only 3 of the CRC's 4 bytes fit in the first word,
and the remaining byte has to be XORed into the second.

The following implements this, and you can play around with it a bit if you
want.  There may be a way to optimize it a bit more.

But I think you'll find it's a bit more complex than you thought.

I think I'd like to stay with the shorter and simpler 4-2-1 step-down.

u32 crc32c_arch(u32 crc, const u8 *p, size_t len)
{
	if (!static_branch_likely(&have_crc32))
		return crc32c_base(crc, p, len);

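	/*
	 * Long inputs: use the PCLMULQDQ-based 3-way implementation when
	 * the CPU has it and the FPU is usable in this context.
	 */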
	if (IS_ENABLED(CONFIG_X86_64) && len >= CRC32C_PCLMUL_BREAKEVEN &&
	    static_branch_likely(&have_pclmulqdq) && crypto_simd_usable()) {
		kernel_fpu_begin();
		crc = crc32c_x86_3way(crc, p, len);
		kernel_fpu_end();
		return crc;
	}

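	/*
	 * len isn't a whole number of words: consume the odd bytes first.
	 * A nonzero incoming CRC has to be XORed over the first 4 data
	 * bytes, which may end up split across two words.
	 */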
	if (len % sizeof(unsigned long) != 0) {
		unsigned long msgpoly;
		u32 orig_crc = crc;

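		/* 4-2-1 step-down for inputs shorter than one word. */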
		if (len < sizeof(unsigned long)) {
			if (sizeof(unsigned long) > 4 && (len & 4)) {
				asm("crc32l %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u32 *)p));
				p += 4;
			}
			if (len & 2) {
				asm("crc32w %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u16 *)p));
				p += 2;
			}
			if (len & 1)
				asm("crc32b %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*p));
			return crc;
		}
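		/*
		 * XOR the incoming CRC over the first word, then shift the
		 * odd leading bytes to the top; the zero bytes this
		 * implicitly prepends don't change the CRC.  (CRC32_INST is
		 * the word-sized crc32 instruction, defined elsewhere in
		 * this file.)
		 */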
		msgpoly = (get_unaligned((unsigned long *)p) ^ orig_crc) <<
			  (8 * (-len % sizeof(unsigned long)));
		p += len % sizeof(unsigned long);
		crc = 0;
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));

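		/*
		 * Second word: fold in whichever CRC bytes were shifted out
		 * of the first word (none if there were >= 4 odd bytes).
		 */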
		msgpoly = get_unaligned((unsigned long *)p) ^
			  ((unsigned long)orig_crc >>
			   (8 * (len % sizeof(unsigned long))));
		p += sizeof(unsigned long);
		len -= (len % sizeof(unsigned long)) + sizeof(unsigned long);
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));
	}

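	/* Main loop: CRC one word at a time. */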
	for (len /= sizeof(unsigned long); len != 0;
	     len--, p += sizeof(unsigned long))
		asm(CRC32_INST : "+r" (crc) : ASM_INPUT_RM (*(unsigned long *)p));

	return crc;
}
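
P.S. If anyone wants to sanity-check the two identities this relies on
(leading zero bytes are a no-op when the running CRC is 0, and a nonzero
incoming CRC is equivalent to XORing it over the first 4 data bytes),
here's a quick standalone userspace sketch against a bitwise software
CRC32C.  crc32c_sw and the test values here are just illustration, not
kernel code:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected), no pre/post inversion, i.e.
 * the same raw state that crc32c_arch() operates on. */
static uint32_t crc32c_sw(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82f63b78 : 0);
	}
	return crc;
}

int main(void)
{
	static const uint8_t msg[] = "123456789";
	const size_t n = sizeof(msg) - 1;
	const uint32_t init = 0xdeadbeef;
	uint8_t buf[16];

	/* Identity 1: with a running CRC of 0, leading zero bytes change
	 * nothing, so the front of a buffer can be padded for free. */
	memset(buf, 0, 3);
	memcpy(buf + 3, msg, n);
	assert(crc32c_sw(0, buf, 3 + n) == crc32c_sw(0, msg, n));

	/* Identity 2: a nonzero incoming CRC equals XORing its bytes
	 * (little-endian) over the first 4 message bytes and starting
	 * from 0, which is what the msgpoly code above does. */
	memcpy(buf, msg, n);
	for (int i = 0; i < 4; i++)
		buf[i] ^= init >> (8 * i);
	assert(crc32c_sw(0, buf, n) == crc32c_sw(init, msg, n));

	printf("both identities hold\n");
	return 0;
}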
