Message-ID: <20181130142600.13782-6-dave.rodgman@arm.com>
Date: Fri, 30 Nov 2018 14:26:28 +0000
From: Dave Rodgman <dave.rodgman@....com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
CC: "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"davem@...emloft.net" <davem@...emloft.net>,
Matt Sealey <Matt.Sealey@....com>,
"nitingupta910@...il.com" <nitingupta910@...il.com>,
"markus@...rhumer.com" <markus@...rhumer.com>,
"minchan@...nel.org" <minchan@...nel.org>,
"sergey.senozhatsky.work@...il.com"
<sergey.senozhatsky.work@...il.com>,
"sonnyrao@...gle.com" <sonnyrao@...gle.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
nd <nd@....com>, "sfr@...b.auug.org.au" <sfr@...b.auug.org.au>
Subject: [PATCH 5/8] lib/lzo: fast 8-byte copy on arm64
From: Matt Sealey <matt.sealey@....com>
Enable faster 8-byte copies on arm64. Like x86_64, arm64 handles unaligned
accesses efficiently, so extend the COPY8 fast path (a single unaligned
64-bit load/store) to CONFIG_ARM64 instead of falling back to two 4-byte
copies.
Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodgman@arm.com
Signed-off-by: Matt Sealey <matt.sealey@....com>
Signed-off-by: Dave Rodgman <dave.rodgman@....com>
Cc: David S. Miller <davem@...emloft.net>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Markus F.X.J. Oberhumer <markus@...rhumer.com>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Nitin Gupta <nitingupta910@...il.com>
Cc: Richard Purdie <rpurdie@...nedhand.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sonny Rao <sonnyrao@...gle.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@...b.auug.org.au>
---
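Not part of the patch, just an illustrative note: a minimal user-space sketch
of what the COPY8 fast path amounts to on architectures with efficient
unaligned access. The function name copy8() is hypothetical, and memcpy()
here stands in for the kernel's get_unaligned()/put_unaligned() helpers:

#include <stdint.h>
#include <string.h>

/* Illustrative analogue of the kernel's COPY8 macro: a fixed-size
 * memcpy() through a 64-bit temporary, which the compiler lowers to a
 * single unaligned 8-byte load and store on arm64 and x86_64.
 */
static inline void copy8(unsigned char *dst, const unsigned char *src)
{
	uint64_t tmp;

	memcpy(&tmp, src, sizeof(tmp));   /* like get_unaligned((const u64 *)src) */
	memcpy(dst, &tmp, sizeof(tmp));   /* like put_unaligned(tmp, (u64 *)dst) */
}

Without this fast path, the fallback COPY8 performs the same copy as two
4-byte COPY4 operations, costing an extra load/store pair for every 8 bytes
moved in the LZO literal/match copy loops.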
lib/lzo/lzodefs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index c8965dc181df..06fa83a38e0a 100644
--- a/lib/lzo/lzodefs.h
+++ b/lib/lzo/lzodefs.h
@@ -15,7 +15,7 @@
 #define COPY4(dst, src)	\
 		put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))
-#if defined(CONFIG_X86_64)
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
 #define COPY8(dst, src)	\
 		put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
 #else
 #define COPY8(dst, src)	\
--
2.17.1