Message-ID: <063D6719AE5E284EB5DD2968C1650D6D0F6C9A35@AcuExch.aculab.com>
Date:	Mon, 24 Feb 2014 14:07:16 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Daniel Borkmann' <dborkman@...hat.com>
CC:	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-sctp@...r.kernel.org" <linux-sctp@...r.kernel.org>
Subject: RE: [PATCH net-next] loopback: sctp: add NETIF_F_SCTP_CSUM to
 device features

From: Daniel Borkmann
> > Which architecture and which version of crc32_le() does your kernel use?
> 
> It's using slice-by-8 algorithm, and my machine is [only]
> a core i7 (x86_64). Apparently, it is not using the crypto
> version that is accelerated.

I still can't imagine it being that slow.
The SCTP code isn't known for its fast paths; although those costs
are per-message rather than per-byte, I'd have thought they'd still
be significant.

Having looked at the code, try compiling with CRC_LE_BITS == 8.
I suspect the much smaller data cache footprint will help, and the
instruction saving from slice-by-8 isn't that large anyway (due to
all the shifts it has to do).
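
Roughly, the 8-bit table variant boils down to the following per byte
(untested sketch with kernel-style u8/u32; 0xedb88320 is the ordinary
reflected CRC32 polynomial and is only illustrative - the SCTP checksum
is actually CRC32c, and the real code lives in lib/crc32.c):

static u32 tab[256];

static void build_table(void)
{
	u32 crc;
	int i, j;

	for (i = 0; i < 256; i++) {
		crc = i;
		for (j = 0; j < 8; j++)
			crc = (crc >> 1) ^ (crc & 1 ? 0xedb88320 : 0);
		tab[i] = crc;
	}
}

/* One table lookup and one shift per byte. */
static u32 crc32_bytewise(u32 crc, const u8 *p, size_t len)
{
	while (len--)
		crc = tab[(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}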

What may help is to have a 'double shift' table and compute the CRCs
of the odd and even bytes separately, then xor them together at the
end. That might improve the dependency chains and let the CPU execute
more instructions in parallel.
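
This isn't quite the odd/even split above, but a related two-bytes-per-step
arrangement shows what the 'double shift' table looks like (untested sketch,
reusing tab[] and build_table() from the previous sketch); the two lookups
inside the loop don't depend on each other:

static u32 tab2[256];

/* tab2[i] advances a byte's contribution by 16 bits instead of 8. */
static void build_table2(void)
{
	int i;

	for (i = 0; i < 256; i++)
		tab2[i] = (tab[i] >> 8) ^ tab[tab[i] & 0xff];
}

static u32 crc32_by2(u32 crc, const u8 *p, size_t len)
{
	while (len >= 2) {
		crc ^= p[0] | (p[1] << 8);
		crc = tab2[crc & 0xff] ^ tab[(crc >> 8) & 0xff] ^ (crc >> 16);
		p += 2;
		len -= 2;
	}
	while (len--)		/* odd trailing byte */
		crc = tab[(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}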

Another improvement is to read the next byte from the buffer
before processing the previous one (possibly with a slight
loop unroll).
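
Something like this for the read-ahead (untested sketch, again reusing
tab[] from above):

static u32 crc32_readahead(u32 crc, const u8 *p, size_t len)
{
	u8 cur, next;

	if (!len)
		return crc;
	cur = *p++;
	while (--len) {
		next = *p++;	/* load the next byte before using the current one */
		crc = tab[(crc ^ cur) & 0xff] ^ (crc >> 8);
		cur = next;
	}
	return tab[(crc ^ cur) & 0xff] ^ (crc >> 8);
}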

As has been noted elsewhere, using a 16-entry lookup table can be
faster than the 256-entry one.
What I don't know is whether the crc32 can be analysed down to a
small number of shifts and xors (crc16 reduces quite nicely) that
might easily execute faster than the lookup table.
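
For reference, the 16-entry (nibble-at-a-time) form looks something like
this - the whole table is 64 bytes, so it fits in a single cache line
(untested sketch; low nibble first because the CRC is reflected):

static u32 tab16[16];

static void build_table16(void)
{
	u32 crc;
	int i, j;

	for (i = 0; i < 16; i++) {
		crc = i;
		for (j = 0; j < 4; j++)
			crc = (crc >> 1) ^ (crc & 1 ? 0xedb88320 : 0);
		tab16[i] = crc;
	}
}

static u32 crc32_nibble(u32 crc, const u8 *p, size_t len)
{
	while (len--) {
		crc = tab16[(crc ^ *p) & 0x0f] ^ (crc >> 4);
		crc = tab16[(crc ^ (*p++ >> 4)) & 0x0f] ^ (crc >> 4);
	}
	return crc;
}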

	David


