Message-ID: <226c88f6446d43afb6d9b5dffda5ab2a@AcuMS.aculab.com>
Date: Sun, 14 Nov 2021 14:44:36 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Eric Dumazet' <edumazet@...gle.com>,
Alexander Duyck <alexander.duyck@...il.com>
CC: Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
netdev <netdev@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
"Peter Zijlstra" <peterz@...radead.org>
Subject: RE: [PATCH v1] x86/csum: rewrite csum_partial()
From: Eric Dumazet
> Sent: 11 November 2021 22:31
..
> That requires an extra add32_with_carry(), which unfortunately made
> the thing slower for me.
>
> I even hardcoded an inline fast_csum_40bytes() and got best results
> with the 10+1 addl,
> instead of
> (5 + 1) adcq + mov (needing one extra register) + shift + addl + adcl
Did you try something like:
	sum = buf[0];
	val = buf[1];
	asm(
		add64 sum, val
		adc64 sum, buf[2]
		adc64 sum, buf[3]
		adc64 sum, buf[4]
		adc64 sum, 0
	)
	sum_hi = sum >> 32;
	asm(
		add32 sum, sum_hi
		adc32 sum, 0
	)
Splitting it like that should allow the compiler to insert
additional instructions between the two 'adc' blocks,
making it much more likely that the cpu will schedule them
in parallel with other instructions.
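For reference, a compilable GNU C version of that split might look
something like the sketch below (my own illustration, not code from
Eric's patch: the csum_40bytes name, the buf argument and the
constraint choices are all invented, and it assumes the kernel's
u64/u32 types):

static inline u32 csum_40bytes(const u64 *buf)
{
	u64 sum = buf[0];
	u32 lo, hi;

	/* 64bit chain: 1 addq + 3 adcq, then adcq $0 to fold the carry. */
	asm("	addq %[v1], %[sum]\n"
	    "	adcq %[v2], %[sum]\n"
	    "	adcq %[v3], %[sum]\n"
	    "	adcq %[v4], %[sum]\n"
	    "	adcq $0, %[sum]\n"
	    : [sum] "+r" (sum)
	    : [v1] "r" (buf[1]), [v2] "r" (buf[2]),
	      [v3] "r" (buf[3]), [v4] "r" (buf[4]));

	lo = sum;
	hi = sum >> 32;

	/* Separate asm block: the compiler is free to schedule other
	 * instructions between the two dependency chains. */
	asm("	addl %[hi], %[lo]\n"
	    "	adcl $0, %[lo]\n"
	    : [lo] "+r" (lo)
	    : [hi] "r" (hi));

	return lo;
}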
The extra 5 adc32 have to add 5 clocks (register dependency chain).
The 'mov' ought to be free (register rename) and the extra shift,
addl and adcl cost one clock each - so 3 (maybe 4) clocks.
So the 64bit version really ought to be faster even as a single
asm block.
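Back of an envelope, assuming one clock per dependent ALU op (an
estimate, not a measurement):

	32bit:  1 addl + 10 adcl                        ~11 clocks
	64bit:  1 addq + 4 adcq + adcq $0
	        + shift + addl + adcl $0                ~9 clocks

with the mov hidden by register rename.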
Trying to second-guess the x86 cpu is largely impossible :-)
Oh, and then try the benchmarks on one of the 64bit Atom cpus
used in embedded systems....
We've got some 4core+hyperthreading ones that aren't exactly slow.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)