Message-ID: <CAHk-=wik4GHHXNXgzK-4S=yK=7BsNnrvEnSX3Funu6BFr=Pryw@mail.gmail.com>
Date: Sun, 24 Nov 2024 10:52:36 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: David Laight <David.Laight@...lab.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>, "bp@...en8.de" <bp@...en8.de>,
Josh Poimboeuf <jpoimboe@...nel.org>, "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Arnd Bergmann <arnd@...nel.org>,
Mikel Rychliski <mikel@...elr.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2] x86: Allow user accesses to the base of the guard page
On Sun, 24 Nov 2024 at 07:39, David Laight <David.Laight@...lab.com> wrote:
>
> v2: Rewritten commit message.
Grr. Now I remember why I did it this way - I started looking around
for the bigger context and history.
I wanted that "valid_user_address()" to really be "is this a valid
user address", because it's also used by the fault handling code (for
that reason).
And that means that I wanted valid_user_address() to be the actual
"this address is accessible".
But then it also gets used by that nasty
	unsigned long sum = size + (__force unsigned long)ptr;

	return valid_user_address(sum) && sum >= (__force unsigned long)ptr;
case in __access_ok(), and there "sum" is indeed that "possibly one
past the last valid user address".
I really would want to just remove that size-based horror as per the
comment above it all:
* In fact, we could probably remove the size check entirely, since
* any kernel accesses will be in increasing address order starting
* at 'ptr'.
and that would make this all go away, and that was why I was
(incorrectly) fixating on the zero-sized access at the end of the
address space, because I wasn't even thinking about this part of
__access_ok().
IOW, my *preferred* fix for this all would actually look like this:
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -86,24 +86,12 @@ static inline void __user *mask_user_address(const void __user *ptr)
  *
  * Note that we always have at least one guard page between the
  * max user address and the non-canonical gap, allowing us to
- * ignore small sizes entirely.
- *
- * In fact, we could probably remove the size check entirely, since
- * any kernel accesses will be in increasing address order starting
- * at 'ptr'.
- *
- * That's a separate optimization, for now just handle the small
- * constant case.
+ * ignore the size entirely, since any kernel accesses will be in
+ * increasing address order starting at 'ptr'.
  */
 static inline bool __access_ok(const void __user *ptr, unsigned long size)
 {
-	if (__builtin_constant_p(size <= PAGE_SIZE) && size <= PAGE_SIZE) {
-		return valid_user_address(ptr);
-	} else {
-		unsigned long sum = size + (__force unsigned long)ptr;
-
-		return valid_user_address(sum) && sum >= (__force unsigned long)ptr;
-	}
+	return valid_user_address(ptr);
 }
 #define __access_ok __access_ok
but I suspect that I'm too chicken to actually do that.
Please somebody convince me.
Linus "Bawk bawk bawk" Torvalds