Date:   Sun, 11 Mar 2018 12:22:56 -0700
From:   Eric Biggers <ebiggers3@...il.com>
To:     Nikolay Borisov <n.borisov.lkml@...il.com>
Cc:     Andiry Xu <jix024@....ucsd.edu>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-nvdimm@...ts.01.org,
        dan.j.williams@...el.com, andy.rudoff@...el.com,
        coughlan@...hat.com, swanson@...ucsd.edu, david@...morbit.com,
        jack@...e.com, swhiteho@...hat.com, miklos@...redi.hu,
        andiry.xu@...il.com, Andiry Xu <jix024@...ucsd.edu>,
        herbert@...dor.apana.org.au
Subject: Re: [RFC v2 05/83] Add NOVA filesystem definitions and useful helper
 routines.

On Sun, Mar 11, 2018 at 02:00:13PM +0200, Nikolay Borisov wrote:
> [Adding Herbert Xu to CC since he is the maintainer of the crypto
> subsystem]
> 
> On 10.03.2018 20:17, Andiry Xu wrote:
> <snip>
> 
> > +static inline u32 nova_crc32c(u32 crc, const u8 *data, size_t len)
> > +{
> > +	u8 *ptr = (u8 *) data;
> > +	u64 acc = crc; /* accumulator, crc32c value in lower 32b */
> > +	u32 csum;
> > +
> > +	/* x86 instruction crc32 is part of SSE-4.2 */
> > +	if (static_cpu_has(X86_FEATURE_XMM4_2)) {
> > +		/* This inline assembly implementation should be equivalent
> > +		 * to the kernel's crc32c_intel_le_hw() function used by
> > +		 * crc32c(), but this performs better on test machines.
> > +		 */
> > +		while (len > 8) {
> > +			asm volatile(/* 64b quad words */
> > +				"crc32q (%1), %0"
> > +				: "=r" (acc)
> > +				: "r"  (ptr), "0" (acc)
> > +			);
> > +			ptr += 8;
> > +			len -= 8;
> > +		}
> > +
> > +		while (len > 0) {
> > +			asm volatile(/* trailing bytes */
> > +				"crc32b (%1), %0"
> > +				: "=r" (acc)
> > +				: "r"  (ptr), "0" (acc)
> > +			);
> > +			ptr++;
> > +			len--;
> > +		}
> > +
> > +		csum = (u32) acc;
> > +	} else {
> > +		/* The kernel's crc32c() function should also detect and use the
> > +		 * crc32 instruction of SSE-4.2. But calling in to this function
> > +		 * is about 3x to 5x slower than the inline assembly version on
> > +		 * some test machines.
> 
> That is really odd. Did you try to characterize why this is the case? Is
> it purely the overhead of dispatching to the correct backend function?
> That's a rather big performance hit.
> 
> > +		 */
> > +		csum = crc32c(crc, data, len);
> > +	}
> > +
> > +	return csum;
> > +}
> > +

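One way to characterize the difference Nikolay asks about would be a
throwaway timing module that exercises the library call directly; a rough,
untested sketch along these lines (module name, buffer size and iteration
count are all arbitrary, not part of the patch) would at least show whether
crc32c() is reaching an accelerated backend:

/* crc32c_timing.c -- rough one-off sketch, not part of the NOVA patch */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/crc32c.h>
#include <linux/ktime.h>

#define BUF_SIZE	4096		/* arbitrary block-sized buffer */
#define ITERATIONS	100000		/* arbitrary repetition count */

static int __init crc32c_timing_init(void)
{
	u8 *buf;
	u32 csum = ~0U;
	u64 start, elapsed;
	int i;

	buf = kmalloc(BUF_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	memset(buf, 0x5a, BUF_SIZE);

	/* Time the generic crc32c() path that the patch falls back to. */
	start = ktime_get_ns();
	for (i = 0; i < ITERATIONS; i++)
		csum = crc32c(csum, buf, BUF_SIZE);
	elapsed = ktime_get_ns() - start;

	pr_info("crc32c: %d x %d bytes in %llu ns (csum %08x)\n",
		ITERATIONS, BUF_SIZE, elapsed, csum);

	kfree(buf);
	return 0;
}

static void __exit crc32c_timing_exit(void)
{
}

module_init(crc32c_timing_init);
module_exit(crc32c_timing_exit);
MODULE_LICENSE("GPL");
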
Are you sure that CONFIG_CRYPTO_CRC32C_INTEL was enabled during your tests and
that the accelerated version was being called?  Or, perhaps CRC32C_PCL_BREAKEVEN
(defined in arch/x86/crypto/crc32c-intel_glue.c) needs to be adjusted.  Please
don't hack around performance problems like this; if they exist, they need to be
fixed for everyone.
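
For comparison, if the library call performs as it should, the
filesystem-side helper could collapse to something like the following
(a sketch of the direction only, not a tested patch):

static inline u32 nova_crc32c(u32 crc, const u8 *data, size_t len)
{
	/* Rely on the kernel's crc32c(), which already dispatches to the
	 * SSE-4.2 implementation when CONFIG_CRYPTO_CRC32C_INTEL is set.
	 */
	return crc32c(crc, data, len);
}

That keeps any tuning (including CRC32C_PCL_BREAKEVEN) in one place, where
every crc32c() user benefits from it.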

Eric
