Date:	Tue, 8 Mar 2016 11:11:37 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Alexander Duyck' <alexander.duyck@...il.com>,
	Tom Herbert <tom@...bertland.com>
CC:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
	"kernel-team@...com" <kernel-team@...com>
Subject: RE: [PATCH v5 net-next] net: Implement fast csum_partial for x86_64

From: Alexander Duyck 
...
> >> So the loop:
> >> 10:     adc (%rdx,%rcx,8),%rax
> >>         inc %rcx
> >>         jnz 10b
> >> could easily be as fast as anything that doesn't use the 'new'
> >> instructions that use the overflow flag.
> >> That loop might be measurably faster for aligned buffers.
> >
> > Tested by replacing the unrolled loop in my patch with just:
> >
> > if (len >= 8) {
> >                 asm("clc\n\t"
> >                     "0: adcq (%[src],%%rcx,8),%[res]\n\t"
> >                     "decl %%ecx\n\t"
> >                     "jge 0b\n\t"
> >                     "adcq $0, %[res]\n\t"
> >                             : [res] "=r" (result)
> >                             : [src] "r" (buff), "[res]" (result),
> >                               "c" ((len >> 3) - 1));
> > }
> >
> > This seems to be significantly slower:
> >
> > 1400 bytes: 797 nsecs vs. 202 nsecs
> > 40 bytes: 6.5 nsecs vs. 26.8 nsecs
> 
> You still need the loop unrolling, as the decl and jge have some
> overhead.  You can't just replace it all with a single adc in a tight
> loop, though unrolling should improve things.  The gain from what I
> have seen ends up being minimal, though; I haven't really noticed all
> that much difference in my tests anyway.

The overhead from the jge and decl is probably similar to that of the adc.
The problem is that they can't be executed at the same time, because they
all have dependencies on the flags register: the decl must preserve the
carry for the next adc, and the jge consumes the flags the decl writes, so
the three instructions serialize.
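
For illustration only (an assumption on my part, not necessarily what the
v5 patch does): unrolling in blocks and folding the carry with a trailing
adcq $0 keeps the bookkeeping out of the carry chain entirely, because the
C loop around the block is then free to clobber the flags:

/* Hypothetical sketch; assumes len is a multiple of 64 and the tail is
 * handled elsewhere. */
static unsigned long csum_unrolled8(const void *buff, unsigned long len,
                                    unsigned long sum)
{
        const unsigned long *p = buff;

        while (len >= 64) {
                asm("addq 0*8(%[src]),%[res]\n\t"       /* first add, no carry in  */
                    "adcq 1*8(%[src]),%[res]\n\t"
                    "adcq 2*8(%[src]),%[res]\n\t"
                    "adcq 3*8(%[src]),%[res]\n\t"
                    "adcq 4*8(%[src]),%[res]\n\t"
                    "adcq 5*8(%[src]),%[res]\n\t"
                    "adcq 6*8(%[src]),%[res]\n\t"
                    "adcq 7*8(%[src]),%[res]\n\t"
                    "adcq $0,%[res]"                    /* fold this block's carry */
                    : [res] "+r" (sum)
                    : [src] "r" (p), "m" (*(const char (*)[64])p)
                    : "cc");
                p += 8;
                len -= 64;
        }
        return sum;
}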

Tom did some extra tests last night: a loop construct of 4 instructions
that didn't modify the flags register was twice the speed of the above.
I think there is a 3 instruction loop; add a second adc and it may well
be as fast as your 8-way unrolled version, while being much simpler to
implement.
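
I don't have Tom's exact 4-instruction construct, but as a sketch of the
idea (an illustration only, and one instruction longer than that): update
the counter with lea, which leaves the flags alone, and test it with
jrcxz, which neither reads nor writes them, so the carry survives from one
adc to the next.  A second adcq per iteration, as above, would halve the
bookkeeping per word.

/* Hypothetical sketch; tail handling is left to the caller. */
static unsigned long csum_adc_loop(const unsigned long *buf,
                                   unsigned long words, unsigned long sum)
{
        if (!words)
                return sum;

        asm("clc\n"
            "1:\tadcq (%[buf]),%[sum]\n\t"      /* word + previous carry     */
            "leaq 8(%[buf]),%[buf]\n\t"         /* advance pointer, no flags */
            "leaq -1(%[cnt]),%[cnt]\n\t"        /* count down, no flags      */
            "jrcxz 2f\n\t"                      /* test rcx without flags    */
            "jmp 1b\n"
            "2:\tadcq $0,%[sum]"                /* fold the final carry      */
            : [sum] "+r" (sum), [buf] "+r" (buf), [cnt] "+c" (words)
            :
            : "cc", "memory");
        return sum;
}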

That loop (using swab and jecxz to loop until the high 32 bits of rcx are
non-zero) could also be used with the 'add carry using overflow bit'
instructions (the adcx/adox pair from the ADX extension).
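
Again purely as a sketch (assuming an ADX-capable CPU, Broadwell or later,
and not anything from the patch under discussion): adcx carries through CF
only and adox through OF only, so two independent chains can run in the
same loop, and the lea/jrcxz bookkeeping touches neither flag.

/* Hypothetical sketch; assumes words is even and >= 2, tail handled
 * elsewhere. */
static unsigned long csum_adx(const unsigned long *buf, unsigned long words,
                              unsigned long sum)
{
        unsigned long sum2 = 0;

        asm("xorl %k[sum2],%k[sum2]\n"          /* zero sum2, clear CF and OF */
            "1:\tadcx 0(%[buf]),%[sum]\n\t"     /* chain 1 accumulates via CF */
            "adox 8(%[buf]),%[sum2]\n\t"        /* chain 2 accumulates via OF */
            "leaq 16(%[buf]),%[buf]\n\t"        /* advance pointer, no flags  */
            "leaq -2(%[cnt]),%[cnt]\n\t"        /* count down two words       */
            "jrcxz 2f\n\t"
            "jmp 1b\n"
            "2:\tadcx %[zero],%[sum]\n\t"       /* fold chain 1's pending CF  */
            "adox %[zero],%[sum2]\n\t"          /* fold chain 2's pending OF  */
            "addq %[sum2],%[sum]\n\t"           /* combine the two chains     */
            "adcq $0,%[sum]"                    /* and the carry from that    */
            : [sum] "+r" (sum), [sum2] "+r" (sum2),
              [buf] "+r" (buf), [cnt] "+c" (words)
            : [zero] "r" (0UL)
            : "cc", "memory");
        return sum;
}

Whether that actually beats a plain adc chain is the sort of thing that
would need measuring on the target CPUs.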

	David
