Message-ID: <20211118182221.GI174703@worktop.programming.kicks-ass.net>
Date: Thu, 18 Nov 2021 19:22:21 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Alexander Duyck <alexanderduyck@...com>,
Johannes Berg <johannes.berg@...el.com>,
kernel test robot <lkp@...el.com>
Subject: Re: [PATCH] x86/csum: fix compilation error for UM
On Thu, Nov 18, 2021 at 09:52:39AM -0800, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
>
> load_unaligned_zeropad() is not yet universal.
>
> ARCH=um SUBARCH=x86_64 builds do not have it.
>
> When CONFIG_DCACHE_WORD_ACCESS is not set, simply continue
> the bisection with 4-, 2- and 1-byte steps.
>
> Fixes: df4554cebdaa ("x86/csum: Rewrite/optimize csum_partial()")
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Alexander Duyck <alexanderduyck@...com>
> Cc: Johannes Berg <johannes.berg@...el.com>
> Reported-by: kernel test robot <lkp@...el.com>
> ---
Yeah, that's much nicer. I'll go feed that to the robots I suppose :-)
> arch/x86/lib/csum-partial_64.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/arch/x86/lib/csum-partial_64.c b/arch/x86/lib/csum-partial_64.c
> index 5ec35626945b6db2f7f41c6d46d5e422810eac46..1eb8f2d11f7c785be624eba315fe9ca7989fd56d 100644
> --- a/arch/x86/lib/csum-partial_64.c
> +++ b/arch/x86/lib/csum-partial_64.c
> @@ -92,6 +92,7 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
> buff += 8;
> }
> if (len & 7) {
> +#ifdef CONFIG_DCACHE_WORD_ACCESS
> unsigned int shift = (8 - (len & 7)) * 8;
> unsigned long trail;
>
> @@ -101,6 +102,31 @@ __wsum csum_partial(const void *buff, int len, __wsum sum)
> "adcq $0,%[res]"
> : [res] "+r" (temp64)
> : [trail] "r" (trail));
> +#else
> + if (len & 4) {
> + asm("addq %[val],%[res]\n\t"
> + "adcq $0,%[res]"
> + : [res] "+r" (temp64)
> + : [val] "r" ((u64)*(u32 *)buff)
> + : "memory");
> + buff += 4;
> + }
> + if (len & 2) {
> + asm("addq %[val],%[res]\n\t"
> + "adcq $0,%[res]"
> + : [res] "+r" (temp64)
> + : [val] "r" ((u64)*(u16 *)buff)
> + : "memory");
> + buff += 2;
> + }
> + if (len & 1) {
> + asm("addq %[val],%[res]\n\t"
> + "adcq $0,%[res]"
> + : [res] "+r" (temp64)
> + : [val] "r" ((u64)*(u8 *)buff)
> + : "memory");
> + }
> +#endif
> }
> result = add32_with_carry(temp64 >> 32, temp64 & 0xffffffff);
> if (unlikely(odd)) {
> --
> 2.34.0.rc1.387.gb447b232ab-goog
>
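
For context on the #else path above, here is a small standalone C sketch
(my illustration, not kernel code; the helpers fold_tail() and
add64_with_carry() are made-up names) of what the 4/2/1-byte fallback
computes. It folds the trailing (len & 7) bytes into a 64-bit accumulator
and propagates the carry after each addition, which is what the addq/adcq
pair in the inline asm does; the memcpy() loads stand in for the
*(u32 *)/(u16 *) accesses.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* 64-bit add with end-around carry, emulating "addq; adcq $0". */
static uint64_t add64_with_carry(uint64_t a, uint64_t b)
{
	uint64_t sum = a + b;

	return sum + (sum < a);
}

/* Fold the trailing (len & 7) bytes of buff into temp64 in 4/2/1-byte steps. */
static uint64_t fold_tail(const unsigned char *buff, int len, uint64_t temp64)
{
	if (len & 4) {
		uint32_t v;

		memcpy(&v, buff, sizeof(v));	/* native-endian 32-bit load */
		temp64 = add64_with_carry(temp64, v);
		buff += 4;
	}
	if (len & 2) {
		uint16_t v;

		memcpy(&v, buff, sizeof(v));	/* native-endian 16-bit load */
		temp64 = add64_with_carry(temp64, v);
		buff += 2;
	}
	if (len & 1)
		temp64 = add64_with_carry(temp64, *buff);

	return temp64;
}

int main(void)
{
	unsigned char tail[7] = { 1, 2, 3, 4, 5, 6, 7 };

	/* Example: fold a 7-byte tail starting from a zero accumulator. */
	printf("%#llx\n", (unsigned long long)fold_tail(tail, 7, 0));
	return 0;
}

The carry propagation matters because the result is later folded with
add32_with_carry(temp64 >> 32, temp64 & 0xffffffff), so a carry out of
the 64-bit add has to be fed back in rather than dropped.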