Message-ID: <56488b800d044a7e81efd8b40198a527@AcuMS.aculab.com>
Date: Sat, 29 Aug 2020 19:31:20 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Josh Poimboeuf' <jpoimboe@...hat.com>,
"'x86@...nel.org'" <x86@...nel.org>
CC: "'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'Linus Torvalds'" <torvalds@...ux-foundation.org>,
'Al Viro' <viro@...iv.linux.org.uk>,
'Will Deacon' <will@...nel.org>,
'Dan Williams' <dan.j.williams@...el.com>,
'Andrea Arcangeli' <aarcange@...hat.com>,
"'Waiman Long'" <longman@...hat.com>,
'Peter Zijlstra' <peterz@...radead.org>,
"'Thomas Gleixner'" <tglx@...utronix.de>,
'Andrew Cooper' <andrew.cooper3@...rix.com>,
'Andy Lutomirski' <luto@...nel.org>,
'Christoph Hellwig' <hch@....de>
Subject: RE: [PATCH] x86/uaccess: Use pointer masking to limit uaccess
speculation
From: David Laight
> Sent: 29 August 2020 14:21
>
> From: Josh Poimboeuf
> > Sent: 28 August 2020 20:29
> >
> > On Wed, Aug 19, 2020 at 09:50:06AM -0500, Josh Poimboeuf wrote:
> > > The x86 uaccess code uses barrier_nospec() in various places to prevent
> > > speculative dereferencing of user-controlled pointers (which might be
> > > combined with further gadgets or CPU bugs to leak data).
> > >
> > > There are some issues with the current implementation:
> > >
> > > - The barrier_nospec() in copy_from_user() was inadvertently removed
> > > with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
> > > raw_copy_{to,from}_user()")
> > >
> > > - copy_to_user() and friends should also have a speculation barrier,
> > > because a speculative write to a user-controlled address can still
> > > populate the cache line with the original data.
> > >
> > > - The LFENCE in barrier_nospec() is overkill, when more lightweight user
> > > pointer masking can be used instead.
> > >
> > > Remove all existing barrier_nospec() usage, and instead do user pointer
> > > masking, throughout the x86 uaccess code. This is similar to what arm64
> > > is already doing.
> > >
> > > barrier_nospec() is now unused, and can be removed.
> > >
> > > Fixes: 4b842e4e25b1 ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()")
> > > Suggested-by: Will Deacon <will@...nel.org>
> > > Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
> >
> > Ping?
>
> Rereading the patch, it looks like a lot of bloat (as well as
> a lot of changes).
> Does the array_mask even work on 32-bit archs, where the
> kernel base address is 0xc0000000?
> I'm sure there is something much simpler.
>
> If access_ok() generates ~0u or 0 without a conditional then
> the address can be masked with the result.
> So you probably need to change access_ok() to take the address
> of the user pointer - so the callers become something like:
>         if (!access_ok(&user_buffer, len))
>                 return -EFAULT;
>         __put_user(value, user_buffer);
>
> It would be easier if NULL were guaranteed to be an invalid
> user address (is it?).
> Then access_ok() could return the modified pointer.
> So you get something like:
>         user_buffer = access_ok(user_buffer, len);
>         if (!user_buffer)
>                 return -EFAULT;
>
> Provided the 'last' user page is never allocated (it can't
> be on i386 due to CPU prefetch issues), something like:
> (and with the asm probably all broken)
>
> static inline void __user *access_ok(void __user *b, size_t len)
> {
>         unsigned long x = (unsigned long)b | ((unsigned long)b + len);
>         unsigned long lim = IS_ENABLED(CONFIG_64BIT) ?
>                 1ul << (BITS_PER_LONG - 1) : 0x40000000;
>
>         asm volatile (" add %1, %0\n"
>                 "       sbb %0, %0"     /* x = ~0 if the add carried (range above lim), else 0 */
>                 : "+r" (x) : "r" (lim) : "cc");
>         return (void __user *)((unsigned long)b & ~x);
> }
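
The '&pointer' form suggested earlier would then just be a thin wrapper
around that, something like (untested, and the name is made up):

        /* Mask the user pointer in place; false means the caller should -EFAULT. */
        static inline bool access_ok_inplace(void __user **bp, size_t len)
        {
                *bp = access_ok(*bp, len);      /* the masking access_ok() above */
                return *bp != NULL;
        }
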
Actually, thinking further, if:
1) the access_ok() immediately precedes the user copy (as it should),
2) the user copies use a sensible 'increasing address' copy,
and
3) there is a 'guard page' between valid user and kernel addresses,
then access_ok() only needs to check the base address of the user buffer.
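
So the access_ok() sketch above reduces to something like (again untested):

        static inline void __user *access_ok(void __user *b, size_t len)
        {
                /* len is unused here: the guard page catches any overrun. */
                unsigned long x = (unsigned long)b;
                unsigned long lim = IS_ENABLED(CONFIG_64BIT) ?
                        1ul << (BITS_PER_LONG - 1) : 0x40000000;

                asm volatile (" add %1, %0\n"
                        "       sbb %0, %0"     /* ~0 if b is above the limit, else 0 */
                        : "+r" (x) : "r" (lim) : "cc");
                return (void __user *)((unsigned long)b & ~x);
        }

i.e. the '| (b + len)' term just goes away.
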
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)