Message-Id: <794337eff7bb803d2c4e67d9eee635390c4c48fe.1646812553.git.christophe.leroy@csgroup.eu>
Date: Wed, 9 Mar 2022 08:56:14 +0100
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>
Cc: Christophe Leroy <christophe.leroy@...roup.eu>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH] powerpc: Use rol32() instead of opencoding in csum_fold()
rol32(x, 16) will do the rotate using rlwinm, so there is no need
to open code it with inline assembly.
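For illustration, the fold can be written in plain, portable C roughly as
below. This is a standalone sketch with hypothetical helper names, not the
kernel's own rol32()/csum_fold() implementations:

	#include <stdint.h>
	#include <stdio.h>

	/* Rotate a 32-bit value left by r bits, 0 < r < 32 (same idea as rol32()). */
	static inline uint32_t rol32_sketch(uint32_t x, unsigned int r)
	{
		return (x << r) | (x >> (32 - r));
	}

	/*
	 * Fold a 32-bit one's-complement sum into 16 bits: rotating by 16 and
	 * adding lets any carry from the low halves propagate into the upper
	 * half, which then holds the folded sum; return its complement.
	 */
	static inline uint16_t csum_fold_sketch(uint32_t sum)
	{
		return (uint16_t)(~(sum + rol32_sketch(sum, 16)) >> 16);
	}

	int main(void)
	{
		/* halves 0xffff + 0x0001 carry: fold gives 0x0001, complement 0xfffe */
		printf("%#x\n", csum_fold_sketch(0xffff0001u));
		return 0;
	}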
Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
---
arch/powerpc/include/asm/checksum.h | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/checksum.h b/arch/powerpc/include/asm/checksum.h
index 8321f6053a67..4b573a3b7e17 100644
--- a/arch/powerpc/include/asm/checksum.h
+++ b/arch/powerpc/include/asm/checksum.h
@@ -38,14 +38,15 @@ extern __wsum csum_and_copy_to_user(const void *src, void __user *dst,
*/
static inline __sum16 csum_fold(__wsum sum)
{
- unsigned int tmp;
-
- /* swap the two 16-bit halves of sum */
- __asm__("rlwinm %0,%1,16,0,31" : "=r" (tmp) : "r" (sum));
- /* if there is a carry from adding the two 16-bit halves,
- it will carry from the lower half into the upper half,
- giving us the correct sum in the upper half. */
- return (__force __sum16)(~((__force u32)sum + tmp) >> 16);
+ u32 tmp = (__force u32)sum;
+
+ /*
+ * swap the two 16-bit halves of sum
+ * if there is a carry from adding the two 16-bit halves,
+ * it will carry from the lower half into the upper half,
+ * giving us the correct sum in the upper half.
+ */
+ return (__force __sum16)(~(tmp + rol32(tmp, 16)) >> 16);
}
static inline u32 from64to32(u64 x)
--
2.34.1