Message-ID: <b718357a6f9441428e771f1a4b60d710@AcuMS.aculab.com>
Date: Tue, 12 Nov 2024 09:52:59 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Mikel Rychliski' <mikel@...elr.com>, Thomas Gleixner
<tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov
<bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] x86: Fix off-by-one error in __access_ok
From: Mikel Rychliski
> Sent: 11 November 2024 18:33
>
> Hi David,
>
> Thanks for the review:
>
> On Sunday, November 10, 2024 2:36:49 P.M. EST David Laight wrote:
> > From: Mikel Rychliski
> >
> > > Sent: 09 November 2024 21:03
> > >
> > > We were checking one byte beyond the actual range that would be accessed.
> > > Originally, valid_user_address would consider the user guard page to be
> > > valid, so checks including the final accessible byte would still succeed.
> >
> > Did it allow the entire page or just the first byte?
> > The test for ignoring small constant sizes rather assumes that accesses
> > to the guard page fault (or that transfers start with the first byte).
> >
>
> valid_user_address() allowed the whole guard page. __access_ok() was
> inconsistent about ranges including the guard page (and, as you mention, would
> continue to be with this change).
>
> The problem is that before 86e6b1547b3d, the off-by-one calculation just led to
> another harmless inconsistency in checks including the guard page. Now it
> prohibits reads of the last mapped userspace byte.
So if you could find code that didn't read the first byte of a short buffer
first, you could access the first page of kernel memory.
(Ignoring the STAC/CLAC instructions.)
So that has always been wrong!
OTOH I suspect that all user accesses start with the first byte
and are either 'reasonably sequential' or recheck an updated pointer.
So an architecture with a guard page (not all architectures have one)
need only check that the base address of a user buffer is at or below
the guard page.
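Something like this is the shape of that check - a user-space sketch
only, with a made-up boundary constant and function name, not the
kernel's real valid_user_address():

	#include <stdbool.h>

	/* Hypothetical base of the guard page at the top of user space. */
	#define GUARD_PAGE_BASE	0x7ffffffff000UL

	/*
	 * With a guard page, a 'reasonably sequential' copy that starts
	 * at a valid user address faults on the guard page before it can
	 * reach kernel addresses, so the length can be ignored and only
	 * the base address needs checking.
	 */
	static bool access_ok_base_only(const void *ptr, unsigned long size)
	{
		(void)size;	/* deliberately ignored, see above */
		return (unsigned long)ptr <= GUARD_PAGE_BASE;
	}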
...
> > Why not:
> > 	if (statically_true(size <= PAGE_SIZE) || !size)
> > 		return valid_user_address(ptr);
> > 	end = ptr + size - 1;
> > 	return ptr <= end && valid_user_address(end);
>
> Sure, agree this works as well.
But that is likely to replicate the valid_user_address() code.
> > Although it is questionable whether a zero size should be allowed.
> > Also, if you assume that the actual copies are 'reasonably sequential',
> > it is valid to just ignore the length completely.
> >
> > It also ought to be possible to get the 'size == 0' check out of the common
> > path. Maybe something like:
> > 	if (statically_true(size <= PAGE_SIZE))
> > 		return valid_user_address(ptr);
> > 	end = ptr + size - 1;
> > 	return (ptr <= end || (end++, !size)) && valid_user_address(end);
>
> The first issue I ran into with the size==0 is that __import_iovec() is
> checking access for vectors with io_len==0 (and the check needs to succeed,
> otherwise userspace will get a -EFAULT). Not sure if there are others.
I've looked at __import_iovec() in the past.
The API is horrid, and the 32-bit compat version is actually faster.
It doesn't need to call access_ok() either; the check is done later.
> Similarly, the iovec case is depending on access_ok(0, 0) succeeding. So with
> the example here, end underflows and gets rejected.
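For reference, a stand-alone user-space sketch of that expression (with
valid_user_address() stubbed out against a made-up limit) shows both
behaviours - the comma expression restoring 'end' when size is zero, and
the access_ok(0, 0) case wrapping 'end' to ~0UL and being rejected:

	#include <stdio.h>
	#include <stdbool.h>

	#define GUARD_PAGE_BASE	0x7ffffffff000UL	/* hypothetical */

	static bool valid_user_address(unsigned long addr)	/* stub */
	{
		return addr < GUARD_PAGE_BASE;
	}

	static bool check(unsigned long ptr, unsigned long size)
	{
		unsigned long end = ptr + size - 1;

		/*
		 * size != 0: ptr <= end, so valid_user_address(end) decides.
		 * size == 0, ptr != 0: end underflowed to ptr - 1, the
		 * compare fails and (end++, !size) restores end to ptr.
		 * size == 0, ptr == 0: end wrapped to ~0UL, ptr <= end is
		 * true, and valid_user_address(~0UL) rejects it.
		 */
		return (ptr <= end || (end++, !size)) && valid_user_address(end);
	}

	int main(void)
	{
		printf("%d\n", check(0x1000, 16));	/* 1: normal range */
		printf("%d\n", check(0x1000, 0));	/* 1: zero size, base checked */
		printf("%d\n", check(0, 0));		/* 0: the iovec problem case */
		return 0;
	}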
I've even wondered what the actual issue is with speculative kernel
reads from get_user().
The read itself can't be an issue (a valid user address will also displace
any cache lines), so I think the value read must be used to form an
address in order for any kernel data to be leaked.
You might find a compare (e.g. the length check in import_iovec()), but that
can only expose the high bits of a byte - and probably requires i-cache timing.
But I'm no expert - and the experts hide the fine details.
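(The usual example of that pattern - not kernel code, just the textbook
Spectre-style gadget - is a dependent second load, where the value from
the first, speculative load selects which cache line of a probe array
gets touched:)

	/*
	 * Illustrative two-load gadget: if secret_ptr is speculatively
	 * dereferenced, the value itself is never architecturally visible,
	 * but using it to index probe[] leaves a cache footprint that can
	 * be recovered afterwards by timing loads of probe[i * 4096].
	 */
	static unsigned char probe[256 * 4096];

	static void gadget(const unsigned char *secret_ptr)
	{
		unsigned char v = *secret_ptr;			/* 1st load: the secret byte */
		*(volatile unsigned char *)&probe[v * 4096];	/* 2nd load: leaks v via the cache */
	}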
David