Message-Id: <1652241268-46732-2-git-send-email-jdamato@fastly.com>
Date: Tue, 10 May 2022 20:54:22 -0700
From: Joe Damato <jdamato@...tly.com>
To: netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
linux-kernel@...r.kernel.org, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>
Cc: Joe Damato <jdamato@...tly.com>
Subject: [RFC,net-next,x86 1/6] arch, x86, uaccess: Add nontemporal copy functions
Add a generic non-temporal wrapper to uaccess which can be overridden by
arches that support non-temporal copies.

An implementation is added for x86 which wraps an existing non-temporal
copy in the kernel.
Signed-off-by: Joe Damato <jdamato@...tly.com>
---
 arch/x86/include/asm/uaccess_64.h | 6 ++++++
 include/linux/uaccess.h           | 6 ++++++
 2 files changed, 12 insertions(+)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 45697e0..ed41dba 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -65,6 +65,12 @@ extern long __copy_user_flushcache(void *dst, const void __user *src, unsigned s
 extern void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
				   size_t len);
 
+static inline unsigned long
+__copy_from_user_nocache(void *dst, const void __user *src, unsigned long size)
+{
+	return (unsigned long)__copy_user_nocache(dst, src, (unsigned int) size, 0);
+}
+
 static inline int
 __copy_from_user_inatomic_nocache(void *dst, const void __user *src,
				  unsigned size)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 5461794..d1f57a1 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -234,6 +234,12 @@ static inline bool pagefault_disabled(void)
 
 #ifndef ARCH_HAS_NOCACHE_UACCESS
 static inline __must_check unsigned long
+__copy_from_user_nocache(void *to, const void __user *from, unsigned long n)
+{
+	return __copy_from_user(to, from, n);
+}
+
+static inline __must_check unsigned long
 __copy_from_user_inatomic_nocache(void *to, const void __user *from,
				  unsigned long n)
 {
--
2.7.4