Message-ID: <3518c78fab894d01b391c764efffbb62@AcuMS.aculab.com>
Date: Sun, 14 Nov 2021 14:21:20 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Alexander Duyck' <alexander.duyck@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>
CC: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
the arch/x86 maintainers <x86@...nel.org>,
"Peter Zijlstra" <peterz@...radead.org>
Subject: RE: [PATCH v1] x86/csum: rewrite csum_partial()
From: Alexander Duyck
> Sent: 11 November 2021 21:56
...
> It might be worthwhile to beef up the odd check to account for
> anything 7 bytes or less. To address it you could do something along
> the lines of:
> 	unaligned = 7 & (unsigned long) buff;
> 	if (unaligned) {
> 		shift = unaligned * 8;
> 		/* aligned read covering buff; mask off the bytes before it */
> 		temp64 = (*(unsigned long *)(buff - unaligned) >> shift) << shift;
> 		buff += 8 - unaligned;
> 		if (len < 8 - unaligned) {
> 			/* also mask off the bytes beyond buff + len */
> 			shift = (8 - len - unaligned) * 8;
> 			temp64 <<= shift;
> 			temp64 >>= shift;
> 			len = 0;
> 		} else {
> 			len -= 8 - unaligned;
> 		}
> 		result += temp64;
> 		result += result < temp64;
> 	}
I tried doing that.
Basically it is likely to take longer than just doing the memory reads:
the register dependency chain is just too long.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)