Message-ID: <f54657f1b5e74ec99cef62228db50dee@AcuMS.aculab.com>
Date: Sat, 29 Aug 2020 13:21:04 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Josh Poimboeuf' <jpoimboe@...hat.com>,
"x86@...nel.org" <x86@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
Will Deacon <will@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Christoph Hellwig <hch@....de>
Subject: RE: [PATCH] x86/uaccess: Use pointer masking to limit uaccess
speculation
From: Josh Poimboeuf
> Sent: 28 August 2020 20:29
>
> On Wed, Aug 19, 2020 at 09:50:06AM -0500, Josh Poimboeuf wrote:
> > The x86 uaccess code uses barrier_nospec() in various places to prevent
> > speculative dereferencing of user-controlled pointers (which might be
> > combined with further gadgets or CPU bugs to leak data).
> >
> > There are some issues with the current implementation:
> >
> > - The barrier_nospec() in copy_from_user() was inadvertently removed
> > with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
> > raw_copy_{to,from}_user()")
> >
> > - copy_to_user() and friends should also have a speculation barrier,
> > because a speculative write to a user-controlled address can still
> > populate the cache line with the original data.
> >
> > - The LFENCE in barrier_nospec() is overkill, when more lightweight user
> > pointer masking can be used instead.
> >
> > Remove all existing barrier_nospec() usage, and instead do user pointer
> > masking, throughout the x86 uaccess code. This is similar to what arm64
> > is already doing.
> >
> > barrier_nospec() is now unused, and can be removed.
> >
> > Fixes: 4b842e4e25b1 ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()")
> > Suggested-by: Will Deacon <will@...nel.org>
> > Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
>
> Ping?
Rereading the patch, it looks like a lot of bloat (as well as a
lot of changes).
Does the array_mask approach even work on 32-bit arches, where the
kernel base address is 0xc0000000?
I'm sure there is something much simpler.
If access_ok() generates ~0u or 0 without a conditional, then the
address can simply be masked with the result.
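That kind of 0/~0 mask can be generated without a branch using the
same cmp/sbb pattern that x86's array_index_mask_nospec() already
uses; roughly (helper name invented, untested):

static inline unsigned long user_ptr_mask(const void __user *p, unsigned long limit)
{
	unsigned long mask;

	/* mask = (p < limit) ? ~0ul : 0 */
	asm volatile ("	cmp %1, %2\n"
		      "	sbb %0, %0"
		      : "=r" (mask)
		      : "r" (limit), "r" ((unsigned long)p)
		      : "cc");
	return mask;
}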
So you probably need to change access_ok() to take the address of
the user pointer, so that the callers become something like:
	if (!access_ok(&user_buffer, len))
		return -EFAULT;
	__put_user(value, user_buffer);
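FWIW such a variant could lean on the existing
array_index_mask_nospec() helper instead of open-coded asm; a rough,
untested sketch (the name access_ok_inplace and using TASK_SIZE_MAX
as the limit are just for illustration):

#include <linux/nospec.h>
#include <linux/uaccess.h>	/* TASK_SIZE_MAX */

/* Mask *pp in place: valid pointers pass through, invalid ones become NULL. */
static inline int access_ok_inplace(void __user **pp, size_t len)
{
	unsigned long p = (unsigned long)*pp;
	unsigned long mask = array_index_mask_nospec(p | (p + len), TASK_SIZE_MAX);

	*pp = (void __user *)(p & mask);
	return mask != 0;
}

In practice it would presumably stay a macro so the pointer type
doesn't matter.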
It would be easier if NULL were guaranteed to be an invalid
user address (is it?).
Then access_ok() could return the modified pointer.
So you get something like:
	user_buffer = access_ok(user_buffer, len);
	if (!user_buffer)
		return -EFAULT;
Provided the 'last' user page is never allocated (it can't be on
i386 due to CPU prefetch issues), something like this should do
(treat the asm as a sketch, untested):
static inline void __user *access_ok(void __user *b, size_t len)
{
	unsigned long x = (unsigned long)b | ((unsigned long)b + len);
	unsigned long lim = IS_ENABLED(CONFIG_X86_64) ? 1ull << 63 : 0x40000000;

	/* x becomes ~0 if the range reaches the kernel half, else 0 */
	asm volatile ("	add %1, %0\n"
		      "	sbb %0, %0"
		      : "+r" (x)
		      : "r" (lim)
		      : "cc");
	return (void __user *)((unsigned long)b & ~x);
}
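A caller of that would then end up looking something like this
(put_word() is just an illustrative name, untested):

static long put_word(void __user *ubuf, unsigned long val)
{
	ubuf = access_ok(ubuf, sizeof(val));
	if (!ubuf)
		return -EFAULT;

	/* ubuf is now either the original pointer or NULL, so a
	 * mispredicted branch can't store to an attacker-chosen address. */
	return __put_user(val, (unsigned long __user *)ubuf);
}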
David
-