Message-ID: <CAPcyv4jk_SZGaCtDm39FAdMCo9fQo87gjuAj1xzUCT9isycz9g@mail.gmail.com>
Date: Mon, 14 Sep 2020 12:06:56 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: X86 ML <x86@...nel.org>, Al Viro <viro@...iv.linux.org.uk>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Will Deacon <will@...nel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Christoph Hellwig <hch@....de>,
David Laight <David.Laight@...lab.com>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH v3] x86/uaccess: Use pointer masking to limit uaccess speculation
On Thu, Sep 10, 2020 at 10:24 AM Josh Poimboeuf <jpoimboe@...hat.com> wrote:
>
> The x86 uaccess code uses barrier_nospec() in various places to prevent
> speculative dereferencing of user-controlled pointers (which might be
> combined with further gadgets or CPU bugs to leak data).
>
> There are some issues with the current implementation:
>
> - The barrier_nospec() in copy_from_user() was inadvertently removed
> with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
> raw_copy_{to,from}_user()")
>
> - copy_to_user() and friends should also have a speculation barrier,
> because a speculative write to a user-controlled address can still
> populate the cache line with the original data.
>
> - The LFENCE in barrier_nospec() is overkill, when more lightweight user
> pointer masking can be used instead.
>
> Remove all existing barrier_nospec() usage, and instead do user pointer
> masking, throughout the x86 uaccess code. This is similar to what arm64
> is already doing with uaccess_mask_ptr().
>
> barrier_nospec() is now unused, and can be removed.
>
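For anyone following along, the masking idea can be sketched in plain userspace C. Everything below is hypothetical and illustrative only (a made-up 47-bit limit and stand-in names); the actual patch builds force_user_ptr() on array_index_nospec() with TASK_SIZE_MAX:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative user/kernel split; stand-in for the real TASK_SIZE_MAX. */
#define TASK_LIMIT ((uintptr_t)1 << 47)

/*
 * Branchless mask, modeled on the generic array_index_mask_nospec():
 * evaluates to ~0 when addr < limit and 0 otherwise, with no
 * mispredictable branch for the CPU to speculate past.  (Relies on
 * arithmetic right shift of a signed value, as the kernel's version does.)
 */
static uintptr_t mask_user_range(uintptr_t addr, uintptr_t limit)
{
	return (uintptr_t)(~(intptr_t)(addr | (limit - 1 - addr))
			   >> (sizeof(uintptr_t) * 8 - 1));
}

/*
 * Sketch of what a force_user_ptr()-style helper does: an in-range
 * pointer passes through unchanged, while an out-of-range (kernel)
 * address is clamped to NULL, even when executed speculatively.
 */
static void *force_user_ptr_sketch(const void *ptr)
{
	uintptr_t addr = (uintptr_t)ptr;

	return (void *)(addr & mask_user_range(addr, TASK_LIMIT));
}
```

So a user address survives the mask intact, while anything at or above the limit collapses to NULL; a mis-speculated access_ok() can then never forward a kernel address to the subsequent access.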
> Fixes: 4b842e4e25b1 ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()")
> Suggested-by: Will Deacon <will@...nel.org>
> Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
> ---
> v3:
>
> - Rebased on vfs#for-next, using TASK_SIZE_MAX now that set_fs() is
>   gone. I considered just clearing the most significant bit, but that
>   only works for 64-bit, so in the interest of common code I went with
>   the more straightforward enforcement of the TASK_SIZE_MAX limit.
>
> - Rename the macro to force_user_ptr(), which is more descriptive, and
>   also more distinguishable from a planned future macro for sanitizing
>   __user pointers on syscall entry.
>
>  Documentation/admin-guide/hw-vuln/spectre.rst |  6 ++--
>  arch/x86/include/asm/barrier.h                |  3 --
>  arch/x86/include/asm/checksum_32.h            |  6 ++--
>  arch/x86/include/asm/futex.h                  |  5 +++
>  arch/x86/include/asm/uaccess.h                | 35 ++++++++++++-------
>  arch/x86/include/asm/uaccess_64.h             | 16 ++++-----
>  arch/x86/lib/csum-wrappers_64.c               |  5 +--
>  arch/x86/lib/getuser.S                        | 10 +++---
>  arch/x86/lib/putuser.S                        |  8 +++++
>  arch/x86/lib/usercopy_32.c                    |  6 ++--
>  arch/x86/lib/usercopy_64.c                    |  7 ++--
>  lib/iov_iter.c                                |  2 +-
>  12 files changed, 65 insertions(+), 44 deletions(-)
>
> diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
> index e05e581af5cf..27a8adedd2b8 100644
> --- a/Documentation/admin-guide/hw-vuln/spectre.rst
> +++ b/Documentation/admin-guide/hw-vuln/spectre.rst
> @@ -426,9 +426,9 @@ Spectre variant 1
>     <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
>     not cover all attack vectors for Spectre variant 1.
>
> -   Copy-from-user code has an LFENCE barrier to prevent the access_ok()
> -   check from being mis-speculated. The barrier is done by the
> -   barrier_nospec() macro.
> +   Usercopy code uses user pointer masking to prevent the access_ok()
> +   check from being mis-speculated in the success path with a kernel
> +   address. The masking is done by the force_user_ptr() macro.
>
>     For the swapgs variant of Spectre variant 1, LFENCE barriers are
>     added to interrupt, exception and NMI entry where needed. These
> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index 7f828fe49797..d158ea1fa250 100644
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -48,9 +48,6 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
>  /* Override the default implementation from linux/nospec.h. */
>  #define array_index_mask_nospec array_index_mask_nospec
>
> -/* Prevent speculative execution past this barrier. */
> -#define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)
> -
>  #define dma_rmb()        barrier()
>  #define dma_wmb()        barrier()
> diff --git a/arch/x86/include/asm/checksum_32.h b/arch/x86/include/asm/checksum_32.h
> index 17da95387997..c7ebc40c6fb9 100644
> --- a/arch/x86/include/asm/checksum_32.h
> +++ b/arch/x86/include/asm/checksum_32.h
> @@ -49,7 +49,8 @@ static inline __wsum csum_and_copy_from_user(const void __user *src,
>          might_sleep();
>          if (!user_access_begin(src, len))
>                  return 0;
> -        ret = csum_partial_copy_generic((__force void *)src, dst, len);
> +        ret = csum_partial_copy_generic((__force void *)force_user_ptr(src),
> +                                        dst, len);
I look at this and wonder whether the open-coded "(__force void *)" cast
should be subsumed into the new macro. It also feels like the name should
be "enforce" to distinguish it from the plain type-cast case?
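Roughly what I have in mind, sketched in userspace terms (enforce_user_ptr() and the 47-bit limit are hypothetical, purely to illustrate folding the cast into the macro):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TASK_LIMIT ((uintptr_t)1 << 47)	/* stand-in for TASK_SIZE_MAX */

/*
 * Branchless clamp modeled on array_index_nospec(): returns the index
 * when it is below size, and 0 otherwise, without a mispredictable
 * branch.  (Relies on arithmetic right shift of a signed value.)
 */
static uintptr_t index_nospec(uintptr_t index, uintptr_t size)
{
	uintptr_t mask = (uintptr_t)(~(intptr_t)(index | (size - 1 - index))
				     >> (sizeof(uintptr_t) * 8 - 1));
	return index & mask;
}

/*
 * Hypothetical enforce_user_ptr(): does the masking *and* yields a plain
 * pointer in one step, so call sites lose the open-coded cast.  In the
 * kernel the cast would be the __force address-space one.
 */
#define enforce_user_ptr(ptr) \
	((void *)index_nospec((uintptr_t)(ptr), TASK_LIMIT))
```

Call sites would then pass enforce_user_ptr(src) straight through with no extra cast.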
>          user_access_end();
>
>          return ret;
> @@ -177,8 +178,7 @@ static inline __wsum csum_and_copy_to_user(const void *src,
>          might_sleep();
>          if (!user_access_begin(dst, len))
>                  return 0;
> -
> -        ret = csum_partial_copy_generic(src, (__force void *)dst, len);
> +        ret = csum_partial_copy_generic(src, (__force void *)force_user_ptr(dst), len);
>          user_access_end();
>          return ret;
>  }
> diff --git a/arch/x86/include/asm/futex.h b/arch/x86/include/asm/futex.h
> index f9c00110a69a..0cecdaa362b1 100644
> --- a/arch/x86/include/asm/futex.h
> +++ b/arch/x86/include/asm/futex.h
> @@ -59,6 +59,8 @@ static __always_inline int arch_futex_atomic_op_inuser(int op, int oparg, int *o
>          if (!user_access_begin(uaddr, sizeof(u32)))
>                  return -EFAULT;
>
> +        uaddr = force_user_ptr(uaddr);
> +
>          switch (op) {
>          case FUTEX_OP_SET:
>                  unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
> @@ -94,6 +96,9 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
>
>          if (!user_access_begin(uaddr, sizeof(u32)))
>                  return -EFAULT;
> +
> +        uaddr = force_user_ptr(uaddr);
> +
>          asm volatile("\n"
>                  "1:\t" LOCK_PREFIX "cmpxchgl %4, %2\n"
>                  "2:\n"
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index a4ceda0510ea..d35f6dc22341 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -6,6 +6,7 @@
>   */
>  #include <linux/compiler.h>
>  #include <linux/kasan-checks.h>
> +#include <linux/nospec.h>
>  #include <linux/string.h>
>  #include <asm/asm.h>
>  #include <asm/page.h>
> @@ -66,12 +67,23 @@ static inline bool pagefault_disabled(void);
>   * Return: true (nonzero) if the memory block may be valid, false (zero)
>   * if it is definitely invalid.
>   */
> -#define access_ok(addr, size)                                        \
unnecessary whitespace change?
Other than that and the optional s/force/enforce/ rename + cast
collapse you can add:
Reviewed-by: Dan Williams <dan.j.williams@...el.com>