Message-ID: <163765763802.11128.3296594000945475918.tip-bot2@tip-bot2>
Date: Tue, 23 Nov 2021 08:53:58 -0000
From: "tip-bot2 for Eric Dumazet" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: kernel test robot <lkp@...el.com>,
Eric Dumazet <edumazet@...gle.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: x86/core] x86/csum: Fix compilation error for UM

The following commit has been merged into the x86/core branch of tip:

Commit-ID:     6b2ecb61bb106d3688b315178831ff40d1008591
Gitweb:        https://git.kernel.org/tip/6b2ecb61bb106d3688b315178831ff40d1008591
Author:        Eric Dumazet <edumazet@...gle.com>
AuthorDate:    Thu, 18 Nov 2021 09:52:39 -08:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 23 Nov 2021 09:45:32 +01:00

x86/csum: Fix compilation error for UM

load_unaligned_zeropad() is not yet universal: ARCH=um SUBARCH=x86_64
builds do not provide it. When CONFIG_DCACHE_WORD_ACCESS is not set,
fall back to consuming the trailing bytes with plain 4-, 2- and 1-byte
steps instead.
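
As a rough plain-C illustration of that fallback (a user-space sketch,
not the patch itself: add64_with_carry() and csum_tail() here are
made-up stand-ins for the "addq; adcq $0" inline asm, and the memcpy()
loads replace the kernel's direct pointer casts):

	#include <stdint.h>
	#include <string.h>

	/* Hypothetical helper: 64-bit add with end-around carry,
	 * mimicking what the patch's "addq; adcq $0" asm computes. */
	static inline uint64_t add64_with_carry(uint64_t a, uint64_t b)
	{
		a += b;
		if (a < b)	/* unsigned wraparound => fold carry back in */
			a++;
		return a;
	}

	/* Fold the trailing (len & 7) bytes into the accumulator
	 * in 4-, 2- and 1-byte steps. */
	static uint64_t csum_tail(const unsigned char *buff, int len,
				  uint64_t temp64)
	{
		if (len & 4) {
			uint32_t v;

			memcpy(&v, buff, sizeof(v));	/* alignment-safe load */
			temp64 = add64_with_carry(temp64, v);
			buff += 4;
		}
		if (len & 2) {
			uint16_t v;

			memcpy(&v, buff, sizeof(v));
			temp64 = add64_with_carry(temp64, v);
			buff += 2;
		}
		if (len & 1)
			temp64 = add64_with_carry(temp64, *buff);
		return temp64;
	}
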
Fixes: df4554cebdaa ("x86/csum: Rewrite/optimize csum_partial()")
Reported-by: kernel test robot <lkp@...el.com>
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20211118175239.1525650-1-eric.dumazet@gmail.com
---
arch/x86/lib/csum-partial_64.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/arch/x86/lib/csum-partial_64.c b/arch/x86/lib/csum-partial_64.c
index 5ec3562..1eb8f2d 100644
--- a/arch/x86/lib/csum-partial_64.c
+++ b/arch/x86/lib/csum-partial_64.c
@@ -92,6 +92,7 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
 		buff += 8;
 	}
 	if (len & 7) {
+#ifdef CONFIG_DCACHE_WORD_ACCESS
 		unsigned int shift = (8 - (len & 7)) * 8;
 		unsigned long trail;
 
@@ -101,6 +102,31 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
 		    "adcq $0,%[res]"
 			: [res] "+r" (temp64)
 			: [trail] "r" (trail));
+#else
+		if (len & 4) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u32 *)buff)
+				: "memory");
+			buff += 4;
+		}
+		if (len & 2) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u16 *)buff)
+				: "memory");
+			buff += 2;
+		}
+		if (len & 1) {
+			asm("addq %[val],%[res]\n\t"
+			    "adcq $0,%[res]"
+				: [res] "+r" (temp64)
+				: [val] "r" ((u64)*(u8 *)buff)
+				: "memory");
+		}
+#endif
 	}
 	result = add32_with_carry(temp64 >> 32, temp64 & 0xffffffff);
 	if (unlikely(odd)) {
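
For the CONFIG_DCACHE_WORD_ACCESS=y branch that the new #ifdef keeps,
the shift pair clears the bytes read past the end of the buffer. A
rough user-space sketch of that computation, assuming little-endian
x86-64; load_tail_zeropad() is a hypothetical safe stand-in for
load_unaligned_zeropad(), which really performs a full 8-byte load and
relies on an exception fixup to zero-fill if the load crosses into an
unmapped page:

	#include <stdint.h>
	#include <string.h>

	/* Safe stand-in: copy only the valid tail bytes, zero the rest.
	 * The real load_unaligned_zeropad() loads all 8 bytes at once. */
	static uint64_t load_tail_zeropad(const unsigned char *buff, int tail)
	{
		uint64_t word = 0;

		memcpy(&word, buff, tail);
		return word;
	}

	static uint64_t mask_trailing_bytes(const unsigned char *buff, int len)
	{
		int tail = len & 7;			/* 1..7 trailing bytes */
		unsigned int shift = (8 - tail) * 8;	/* bits beyond the tail */
		uint64_t word = load_tail_zeropad(buff, tail);

		/*
		 * On little-endian x86-64 the shift pair zeroes the high
		 * bytes past the end of the buffer, exactly like the
		 * patch's (load_unaligned_zeropad(buff) << shift) >> shift.
		 */
		return (word << shift) >> shift;
	}

In this sketch the memcpy() already zero-fills past the tail, so the
shifts are redundant here; in the kernel they are what discards the
garbage bytes of the full 8-byte load.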