Message-ID: <382372a83d1644f8b3a701ff7e14d5f1@AcuMS.aculab.com>
Date: Sun, 10 Nov 2024 19:36:49 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Mikel Rychliski' <mikel@...elr.com>, Thomas Gleixner
<tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov
<bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] x86: Fix off-by-one error in __access_ok
From: Mikel Rychliski
> Sent: 09 November 2024 21:03
>
> We were checking one byte beyond the actual range that would be accessed.
> Originally, valid_user_address would consider the user guard page to be
> valid, so checks including the final accessible byte would still succeed.
Did it allow the entire guard page or just its first byte?
The test that skips the length check for small constant sizes rather assumes
that accesses to the guard page fault (or that transfers start with the first byte).
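(For concreteness, a quick userspace model of the off-by-one -- the
USER_PTR_MAX value and the valid_user_address() body below are illustrative
stand-ins, not the kernel's definitions; the helper is modelled as rejecting
everything past the last valid user byte, i.e. the guard page rejected:)

	#include <stdbool.h>
	#include <stdio.h>

	/* illustrative: last byte userspace may touch, not the kernel's value */
	#define USER_PTR_MAX	0x00007fffffffefffUL

	/* modelled so that the guard page no longer passes */
	static bool valid_user_address(unsigned long addr)
	{
		return addr <= USER_PTR_MAX;
	}

	int main(void)
	{
		unsigned long size = 8192;
		unsigned long ptr = USER_PTR_MAX - size + 1;	/* range ends on the last valid byte */

		/* old check: tests one byte past the range, so it is now rejected */
		printf("ptr + size     -> %d\n", valid_user_address(ptr + size));
		/* fixed check: tests the last byte actually accessed */
		printf("ptr + size - 1 -> %d\n", valid_user_address(ptr + size - 1));
		return 0;
	}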
> However, after commit 86e6b1547b3d ("x86: fix user address masking
> non-canonical speculation issue") this is no longer the case.
>
> Update the logic to always consider the final address in the range.
>
> Fixes: 86e6b1547b3d ("x86: fix user address masking non-canonical speculation issue")
> Signed-off-by: Mikel Rychliski <mikel@...elr.com>
> ---
> arch/x86/include/asm/uaccess_64.h | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
> index b0a887209400..3e0eb72c036f 100644
> --- a/arch/x86/include/asm/uaccess_64.h
> +++ b/arch/x86/include/asm/uaccess_64.h
> @@ -100,9 +100,11 @@ static inline bool __access_ok(const void __user *ptr, unsigned long size)
> if (__builtin_constant_p(size <= PAGE_SIZE) && size <= PAGE_SIZE) {
> return valid_user_address(ptr);
> } else {
> - unsigned long sum = size + (__force unsigned long)ptr;
> + unsigned long end = (__force unsigned long)ptr;
>
> - return valid_user_address(sum) && sum >= (__force unsigned long)ptr;
> + if (size)
> + end += size - 1;
> + return valid_user_address(end) && end >= (__force unsigned long)ptr;
Why not:
	if (statically_true(size <= PAGE_SIZE) || !size)
		return valid_user_address(ptr);
	end = ptr + size - 1;
	return ptr <= end && valid_user_address(end);
Although it is questionable whether a zero size should be allowed.
Also, if you assume that the actual copies are 'reasonably sequential',
it is valid to just ignore the length completely.
It also ought to be possible to get the 'size == 0' check out of the common path.
Maybe something like:
	if (statically_true(size <= PAGE_SIZE))
		return valid_user_address(ptr);
	end = ptr + size - 1;
	return (ptr <= end || (end++, !size)) && valid_user_address(end);
You might want a likely() around the <=, but I suspect it makes little
difference on modern x86 (esp. Intel ones).
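Spelled out as a standalone (userspace) sketch so it can be compile-tested --
the PAGE_SIZE and USER_PTR_MAX values, and the statically_true() and
valid_user_address() bodies below, are stand-ins for the kernel's definitions,
not the real ones:

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define USER_PTR_MAX	0x00007fffffffefffUL	/* illustrative */

	/* stand-in for the kernel's constant-folding helper */
	#define statically_true(x)	(__builtin_constant_p(x) && (x))

	static bool valid_user_address(unsigned long addr)
	{
		return addr <= USER_PTR_MAX;
	}

	static inline bool access_ok_sketch(unsigned long ptr, unsigned long size)
	{
		unsigned long end;

		/* small constant sizes: rely on the guard page to catch any overrun */
		if (statically_true(size <= PAGE_SIZE))
			return valid_user_address(ptr);

		end = ptr + size - 1;
		/*
		 * ptr <= end      : no wrap, end is the last byte accessed
		 * (end++, !size)  : only reached if the compare failed; size == 0
		 *                   made end = ptr - 1, so put it back and accept,
		 *                   checking ptr itself; a genuine wrap (size != 0)
		 *                   is rejected
		 */
		return (ptr <= end || (end++, !size)) && valid_user_address(end);
	}

	int main(void)
	{
		printf("%d\n", access_ok_sketch(USER_PTR_MAX - 8191, 8192));	/* 1: ends on the last valid byte */
		printf("%d\n", access_ok_sketch(USER_PTR_MAX - 8190, 8192));	/* 0: runs one byte past it */
		printf("%d\n", access_ok_sketch(4096, 0));			/* 1: zero size accepted at ptr */
		printf("%d\n", access_ok_sketch(-4096UL, 8192));		/* 0: wraps past the top of the address space */
		return 0;
	}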
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)