Message-Id: <20180205182140.266128654@linuxfoundation.org>
Date: Mon, 5 Feb 2018 10:23:00 -0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-arch@...r.kernel.org, Tom Lendacky <thomas.lendacky@....com>,
Kees Cook <keescook@...omium.org>,
kernel-hardening@...ts.openwall.com,
Al Viro <viro@...iv.linux.org.uk>,
torvalds@...ux-foundation.org, alan@...ux.intel.com
Subject: [PATCH 4.14 41/64] x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams <dan.j.williams@...el.com>
commit b5c4ae4f35325d520b230bab6eb3310613b72ac1 upstream.
In preparation for converting some __uaccess_begin() instances to
__uaccess_begin_nospec(), make sure all 'from user' uaccess paths are
using the _begin(), _end() helpers rather than open-coded stac() and
clac().
No functional changes.
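
[Editor's note: at this point in the tree the helpers are one-to-one
wrappers around the same SMAP instructions, which is why the substitution
is not a functional change. A simplified sketch of the definitions,
assumed from arch/x86/include/asm/uaccess.h rather than quoted from this
patch:

	/*
	 * Simplified sketch (assumption; not part of this patch): the
	 * pre-nospec helpers each wrap exactly the SMAP instruction they
	 * replace, so converting a call site is purely mechanical.
	 */
	#define __uaccess_begin()	stac()
	#define __uaccess_end()		clac()
]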
Suggested-by: Ingo Molnar <mingo@...hat.com>
Signed-off-by: Dan Williams <dan.j.williams@...el.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-arch@...r.kernel.org
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Kees Cook <keescook@...omium.org>
Cc: kernel-hardening@...ts.openwall.com
Cc: gregkh@...uxfoundation.org
Cc: Al Viro <viro@...iv.linux.org.uk>
Cc: torvalds@...ux-foundation.org
Cc: alan@...ux.intel.com
Link: https://lkml.kernel.org/r/151727416438.33451.17309465232057176966.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/x86/lib/usercopy_32.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -331,12 +331,12 @@ do {									\
 
 unsigned long __copy_user_ll(void *to, const void *from, unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 	if (movsl_is_ok(to, from, n))
 		__copy_user(to, from, n);
 	else
 		n = __copy_user_intel(to, from, n);
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_user_ll);
@@ -344,7 +344,7 @@ EXPORT_SYMBOL(__copy_user_ll);
 unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
 					unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
 	if (n > 64 && static_cpu_has(X86_FEATURE_XMM2))
 		n = __copy_user_intel_nocache(to, from, n);
@@ -353,7 +353,7 @@ unsigned long __copy_from_user_ll_nocach
 #else
 	__copy_user(to, from, n);
 #endif
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nocache_nozero);
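
[Editor's note: the commit message's "in preparation" refers to the
nospec variant introduced later in this series. A simplified sketch of
what that variant looks like, an assumption based on the follow-on
upstream change and not part of this patch: once every 'from user' path
goes through the helpers, a speculation barrier can be added in one
place and call sites converted without touching open-coded stac().

	/*
	 * Sketch of the follow-on nospec variant (assumption; not in
	 * this patch): same as __uaccess_begin(), but with a barrier
	 * that keeps the CPU from speculatively performing user accesses
	 * before the preceding access_ok() check has resolved.
	 */
	#define __uaccess_begin_nospec()	\
	({					\
		barrier_nospec();		\
		stac();				\
	})
]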