Message-ID: <lsq.1520823972.943057352@decadent.org.uk>
Date: Mon, 12 Mar 2018 03:06:12 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, linux-arch@...r.kernel.org,
gregkh@...uxfoundation.org, "Ingo Molnar" <mingo@...hat.com>,
"Thomas Gleixner" <tglx@...utronix.de>, alan@...ux.intel.com,
"Dan Williams" <dan.j.williams@...el.com>,
"Tom Lendacky" <thomas.lendacky@....com>,
torvalds@...ux-foundation.org, kernel-hardening@...ts.openwall.com,
"Kees Cook" <keescook@...omium.org>,
"Al Viro" <viro@...iv.linux.org.uk>
Subject: [PATCH 3.16 74/76] x86/usercopy: Replace open coded stac/clac
 with __uaccess_{begin, end}

3.16.56-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <dan.j.williams@...el.com>

commit b5c4ae4f35325d520b230bab6eb3310613b72ac1 upstream.

In preparation for converting some __uaccess_begin() instances to
__uaccess_begin_nospec(), make sure all 'from user' uaccess paths are
using the _begin(), _end() helpers rather than open-coded stac() and
clac().

No functional changes.
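
For context: in mainline these helpers are thin macros around the same
stac/clac instructions, plus a speculation barrier in the _nospec
variant. A simplified sketch (adapted from mainline
arch/x86/include/asm/uaccess.h; the CONFIG_X86_SMAP alternatives that
patch stac/clac in at boot are omitted here):

	/*
	 * Open a user-access window: set EFLAGS.AC so SMAP permits
	 * kernel loads/stores to user addresses.
	 */
	#define __uaccess_begin()	stac()

	/* Close the window again: clear EFLAGS.AC. */
	#define __uaccess_end()		clac()

	/*
	 * The variant the 'from user' paths are later converted to:
	 * also emit a speculation barrier so a mispredicted access
	 * check cannot speculatively dereference a user-controlled
	 * pointer before the copy runs.
	 */
	#define __uaccess_begin_nospec()	\
	({					\
		stac();				\
		barrier_nospec();		\
	})

Funneling every call site through the helpers makes that later switch a
mechanical, per-function substitution.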
Suggested-by: Ingo Molnar <mingo@...hat.com>
Signed-off-by: Dan Williams <dan.j.williams@...el.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-arch@...r.kernel.org
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Kees Cook <keescook@...omium.org>
Cc: kernel-hardening@...ts.openwall.com
Cc: gregkh@...uxfoundation.org
Cc: Al Viro <viro@...iv.linux.org.uk>
Cc: torvalds@...ux-foundation.org
Cc: alan@...ux.intel.com
Link: https://lkml.kernel.org/r/151727416438.33451.17309465232057176966.stgit@dwillia2-desk3.amr.corp.intel.com
[bwh: Backported to 3.16:
 - Convert several more functions, which are just wrappers in mainline,
   to use __uaccess_begin()/__uaccess_end()
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
arch/x86/lib/usercopy_32.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -570,12 +570,12 @@ do { \
unsigned long __copy_to_user_ll(void __user *to, const void *from,
unsigned long n)
{
- stac();
+ __uaccess_begin();
if (movsl_is_ok(to, from, n))
__copy_user(to, from, n);
else
n = __copy_user_intel(to, from, n);
- clac();
+ __uaccess_end();
return n;
}
EXPORT_SYMBOL(__copy_to_user_ll);
@@ -583,12 +583,12 @@ EXPORT_SYMBOL(__copy_to_user_ll);
unsigned long __copy_from_user_ll(void *to, const void __user *from,
unsigned long n)
{
- stac();
+ __uaccess_begin();
if (movsl_is_ok(to, from, n))
__copy_user_zeroing(to, from, n);
else
n = __copy_user_zeroing_intel(to, from, n);
- clac();
+ __uaccess_end();
return n;
}
EXPORT_SYMBOL(__copy_from_user_ll);
@@ -596,13 +596,13 @@ EXPORT_SYMBOL(__copy_from_user_ll);
unsigned long __copy_from_user_ll_nozero(void *to, const void __user *from,
unsigned long n)
{
- stac();
+ __uaccess_begin();
if (movsl_is_ok(to, from, n))
__copy_user(to, from, n);
else
n = __copy_user_intel((void __user *)to,
(const void *)from, n);
- clac();
+ __uaccess_end();
return n;
}
EXPORT_SYMBOL(__copy_from_user_ll_nozero);
@@ -610,7 +610,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nozero
unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
unsigned long n)
{
- stac();
+ __uaccess_begin();
#ifdef CONFIG_X86_INTEL_USERCOPY
if (n > 64 && cpu_has_xmm2)
n = __copy_user_zeroing_intel_nocache(to, from, n);
@@ -619,7 +619,7 @@ unsigned long __copy_from_user_ll_nocach
#else
__copy_user_zeroing(to, from, n);
#endif
- clac();
+ __uaccess_end();
return n;
}
EXPORT_SYMBOL(__copy_from_user_ll_nocache);
@@ -627,7 +627,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nocach
unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
unsigned long n)
{
- stac();
+ __uaccess_begin();
#ifdef CONFIG_X86_INTEL_USERCOPY
if (n > 64 && cpu_has_xmm2)
n = __copy_user_intel_nocache(to, from, n);
@@ -636,7 +636,7 @@ unsigned long __copy_from_user_ll_nocach
#else
__copy_user(to, from, n);
#endif
- clac();
+ __uaccess_end();
return n;
}
EXPORT_SYMBOL(__copy_from_user_ll_nocache_nozero);
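
As an illustration of where this is headed: once every 'from user' path
goes through the helper, the speculative-execution hardening in the
follow-up patch of this series reduces to a one-line change per
function. A hypothetical hunk of that conversion (line numbers elided,
shown only as a sketch):

	--- a/arch/x86/lib/usercopy_32.c
	+++ b/arch/x86/lib/usercopy_32.c
	@@ ... @@
	 unsigned long __copy_from_user_ll(void *to, const void __user *from,
	 				unsigned long n)
	 {
	-	__uaccess_begin();
	+	__uaccess_begin_nospec();	/* stac() + speculation barrier */
	 	if (movsl_is_ok(to, from, n))

The 'to user' paths keep plain __uaccess_begin(): the kernel does not
read through a user-supplied source pointer there, so only the
read-from-user side needs the barrier.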