Message-Id: <20230914-optimize_checksum-v5-1-c95b82a2757e@rivosinc.com>
Date: Thu, 14 Sep 2023 20:49:37 -0700
From: Charlie Jenkins <charlie@...osinc.com>
To: Charlie Jenkins <charlie@...osinc.com>,
Palmer Dabbelt <palmer@...belt.com>,
Conor Dooley <conor@...nel.org>,
Samuel Holland <samuel.holland@...ive.com>,
David Laight <David.Laight@...lab.com>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Paul Walmsley <paul.walmsley@...ive.com>,
Albert Ou <aou@...s.berkeley.edu>
Subject: [PATCH v5 1/4] asm-generic: Improve csum_fold
This csum_fold implementation, introduced into arch/arc by Vineet Gupta,
is better than the default implementation on at least arc, x86, arm, and
riscv. Using GCC trunk and compiling a non-inlined version, this
implementation has 41.6667%, 25%, and 16.6667% fewer instructions on
riscv64, x86-64, and arm64 respectively with -O3 optimization.
Signed-off-by: Charlie Jenkins <charlie@...osinc.com>
---
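Not part of the patch: a minimal userspace sketch that checks the
ror32-based fold against the classic two-step fold. For csum = H*2^16 + L,
~csum - ror32(csum, 16) equals 0xffffffff - (H + L)*0x10001 (mod 2^32), so
its upper halfword is the ones' complement of the folded sum. The ror32()
helper below is a stand-in for the kernel's <linux/bitops.h> version, and
the exhaustive loop takes a minute or so to walk all 2^32 inputs.

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's ror32(); only called with n = 16 here. */
static inline uint32_t ror32(uint32_t x, unsigned int n)
{
	return (x >> n) | (x << (32 - n));
}

/* Fold as in the current asm-generic csum_fold(). */
static uint16_t fold_classic(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* Fold as in the arc-style replacement. */
static uint16_t fold_ror(uint32_t csum)
{
	return (uint16_t)((uint32_t)(~csum - ror32(csum, 16)) >> 16);
}

int main(void)
{
	uint32_t csum = 0;

	/* Compare the two folds over every 32-bit input. */
	do {
		if (fold_classic(csum) != fold_ror(csum)) {
			printf("mismatch at %#x\n", csum);
			return 1;
		}
	} while (++csum);

	printf("folds agree for all 32-bit inputs\n");
	return 0;
}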
include/asm-generic/checksum.h | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/include/asm-generic/checksum.h b/include/asm-generic/checksum.h
index 43e18db89c14..adab9ac4312c 100644
--- a/include/asm-generic/checksum.h
+++ b/include/asm-generic/checksum.h
@@ -30,10 +30,7 @@ extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
*/
static inline __sum16 csum_fold(__wsum csum)
{
- u32 sum = (__force u32)csum;
- sum = (sum & 0xffff) + (sum >> 16);
- sum = (sum & 0xffff) + (sum >> 16);
- return (__force __sum16)~sum;
+ return (__force __sum16)((~csum - ror32(csum, 16)) >> 16);
}
#endif
--
2.42.0