Message-ID: <063D6719AE5E284EB5DD2968C1650D6D41116831@AcuExch.aculab.com>
Date: Mon, 7 Mar 2016 13:56:01 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Alexander Duyck' <alexander.duyck@...il.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Tom Herbert <tom@...bertland.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"kernel-team@...com" <kernel-team@...com>
Subject: RE: [PATCH v5 net-next] net: Implement fast csum_partial for x86_64
From: Alexander Duyck
...
> Actually probably the easiest way to go on x86 is to just replace the
> use of len with (len >> 6) and use decl or incl instead of addl or
> subl, and lea instead of addq for the buff address. None of those
> instructions affect the carry flag as this is how such loops were
> intended to be implemented.
>
> I've been doing a bit of testing and that seems to work without
> needing the adcq until after you exit the loop, but doesn't give that
> much of a gain in speed for dropping the instruction from the
> hot-path. I suspect we are probably memory bottle-necked already in
> the loop so dropping an instruction or two doesn't gain you much.
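(For illustration only: the carry-preserving 64-byte inner loop being
described might look roughly like the following; the register assignments
are assumed for the sketch, not taken from the patch.)

	# Assumes %rdi = buff, %ecx = len >> 6 (number of 64-byte blocks, > 0),
	# %rax = running sum, and the carry flag clear on entry.
10:	adcq	0(%rdi),%rax
	adcq	8(%rdi),%rax
	adcq	16(%rdi),%rax
	adcq	24(%rdi),%rax
	adcq	32(%rdi),%rax
	adcq	40(%rdi),%rax
	adcq	48(%rdi),%rax
	adcq	56(%rdi),%rax
	leaq	64(%rdi),%rdi		# lea does not touch the carry flag
	decl	%ecx			# neither does dec
	jnz	10b
	adcq	$0,%rax			# only now fold the last carry back in
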
Right, any superscalar architecture gives you some instructions
'for free' if they can execute at the same time as those on the
critical path (in this case the memory reads and the adc).
This is why loop unrolling can be pointless.
So the loop:
10:	adcq	(%rdx,%rcx,8),%rax	# %rcx runs from -len up to 0
	inc	%rcx			# inc does not modify the carry flag
	jnz	10b
could easily be as fast as anything that doesn't use the 'new'
instructions (adcx/adox) that use the overflow flag.
That loop might be measurably faster for aligned buffers.
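For reference, a self-contained version of that loop might look like the
following (the symbol name, calling convention and register assignments are
illustrative only, not anything from the patch):

	# Sum 'nr' 8-byte words starting at 'buf' with end-around carry.
	# Hypothetical convention: %rdi = buf, %rsi = nr (assumed > 0),
	# 64-bit result returned in %rax.
	.globl	csum_qwords_sketch
csum_qwords_sketch:
	leaq	(%rdi,%rsi,8),%rdx	# %rdx = one past the end of the buffer
	negq	%rsi			# index runs from -nr up to 0
	xorl	%eax,%eax		# sum = 0; also clears the carry flag set by negq
10:	adcq	(%rdx,%rsi,8),%rax	# add the next qword plus the incoming carry
	inc	%rsi			# inc leaves the carry flag alone...
	jnz	10b			# ...and so does jnz
	adcq	$0,%rax			# fold the final carry back into the sum
	ret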
David