Message-ID: <5C094614.9090608@oberhumer.com>
Date: Thu, 6 Dec 2018 16:53:56 +0100
From: "Markus F.X.J. Oberhumer" <markus@...rhumer.com>
To: Dave Rodgman <dave.rodgman@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Cc: "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"davem@...emloft.net" <davem@...emloft.net>,
Matt Sealey <Matt.Sealey@....com>,
"nitingupta910@...il.com" <nitingupta910@...il.com>,
"minchan@...nel.org" <minchan@...nel.org>,
"sergey.senozhatsky.work@...il.com"
<sergey.senozhatsky.work@...il.com>,
"sonnyrao@...gle.com" <sonnyrao@...gle.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
nd <nd@....com>, "sfr@...b.auug.org.au" <sfr@...b.auug.org.au>
Subject: Re: [PATCH 5/8] lib/lzo: fast 8-byte copy on arm64
Acked-by: Markus F.X.J. Oberhumer <markus@...rhumer.com>
On 2018-11-30 15:26, Dave Rodgman wrote:
> From: Matt Sealey <matt.sealey@....com>
>
> Enable faster 8-byte copies on arm64.
>
> Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodgman@arm.com
> Signed-off-by: Matt Sealey <matt.sealey@....com>
> Signed-off-by: Dave Rodgman <dave.rodgman@....com>
> Cc: David S. Miller <davem@...emloft.net>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Herbert Xu <herbert@...dor.apana.org.au>
> Cc: Markus F.X.J. Oberhumer <markus@...rhumer.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Nitin Gupta <nitingupta910@...il.com>
> Cc: Richard Purdie <rpurdie@...nedhand.com>
> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
> Cc: Sonny Rao <sonnyrao@...gle.com>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> Signed-off-by: Stephen Rothwell <sfr@...b.auug.org.au>
> ---
> lib/lzo/lzodefs.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
> index c8965dc181df..06fa83a38e0a 100644
> --- a/lib/lzo/lzodefs.h
> +++ b/lib/lzo/lzodefs.h
> @@ -15,7 +15,7 @@
>  
>  #define COPY4(dst, src) \
>  	put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))
> -#if defined(CONFIG_X86_64)
> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
>  #define COPY8(dst, src) \
>  	put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
>  #else
>
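For anyone reading along, the effect of the one-line change can be sketched in plain userspace C. The memcpy-based get_unaligned()/put_unaligned() stand-ins below are illustrative assumptions; the kernel's real definitions come from <asm/unaligned.h> and compile down to single unaligned load/store instructions on architectures that handle them efficiently, which is exactly why enabling the 8-byte branch helps on arm64:

```c
/*
 * Userspace sketch of the COPY4/COPY8 macros from lib/lzo/lzodefs.h.
 * The memcpy-based get_unaligned()/put_unaligned() stand-ins are
 * assumptions for illustration only; in the kernel these come from
 * <asm/unaligned.h>.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t u32;
typedef uint64_t u64;

/* memcpy lets the compiler emit an unaligned load/store where legal */
#define get_unaligned(ptr) \
	({ __typeof__(*(ptr)) _val; memcpy(&_val, (ptr), sizeof(_val)); _val; })
#define put_unaligned(val, ptr) \
	do { __typeof__(*(ptr)) _val = (val); \
	     memcpy((ptr), &_val, sizeof(_val)); } while (0)

#define COPY4(dst, src) \
	put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))
/* With the patch, arm64 joins x86_64 in taking this 8-byte path,
 * so one COPY8 replaces what would otherwise be two COPY4 calls. */
#define COPY8(dst, src) \
	put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
```

Since both source and destination go through unaligned accessors, COPY8 is safe even when neither pointer is 8-byte aligned, as is routinely the case for LZO match copies.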
--
Markus Oberhumer, <markus@...rhumer.com>, http://www.oberhumer.com/