Message-ID: <CAHk-=wh0oKkRHHqnft8mHaz5nuZNEspGQ5HW4oPJmGGwmccF1w@mail.gmail.com>
Date: Sun, 24 Nov 2024 14:39:11 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: David Laight <David.Laight@...lab.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>, "bp@...en8.de" <bp@...en8.de>,
Josh Poimboeuf <jpoimboe@...nel.org>, "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Arnd Bergmann <arnd@...nel.org>,
Mikel Rychliski <mikel@...elr.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2] x86: Allow user accesses to the base of the guard page
On Sun, 24 Nov 2024 at 14:03, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> On Sun, 24 Nov 2024 at 12:49, David Laight <David.Laight@...lab.com> wrote:
> >
> > Doesn't that just need a <= changed to < ?
> > (And possibly a change of name)
>
> Well, more importantly, it means that we can't use the same helper
> function at all.
Oh, the 'sbb' trick also ends up being one byte off if we increase
USER_PTR_MAX from being the "maximum valid address" to being "one past
the max valid address".
And yeah, none of this *matters*, since "one byte off" is then covered
by the fact that we have that guard page, but this just ends up
annoying my sense of how it all *should* work.
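(Purely for illustration - this is not the actual mask_user_address() /
getuser.S code, and LAST_VALID below is a made-up constant - here's a
user-space model of why an inclusive cmp/sbb test ends up one byte off
if the constant gets redefined as "one past the last valid address":

#include <inttypes.h>
#include <stdio.h>

/* Made-up stand-in for USER_PTR_MAX as "last valid user address" (inclusive) */
#define LAST_VALID 0x00007ffffffffffeUL

/*
 * Model of the cmp/sbb idea: the borrow from comparing the pointer
 * against the limit is smeared into an all-ones mask, which gets OR'ed
 * into the pointer to push any out-of-range access into the guard page.
 * The test is inclusive: ptr == limit is accepted.  Redefine the constant
 * as "one past the last valid address" and this same sequence accepts
 * one extra byte.
 */
static uint64_t apply_mask(uint64_t ptr, uint64_t limit)
{
	uint64_t mask = 0 - (uint64_t)(ptr > limit);	/* cmp ; sbb */
	return ptr | mask;
}

int main(void)
{
	printf("%#" PRIx64 "\n", apply_mask(LAST_VALID,     LAST_VALID)); /* unchanged */
	printf("%#" PRIx64 "\n", apply_mask(LAST_VALID + 1, LAST_VALID)); /* all ones  */
	return 0;
}
)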
I'm starting to think that instead of changing the USER_PTR_MAX value
(that was selected to be the right constant for things that don't care
about the size), we just say "the slow case of access_ok() takes a
tiny hit".
Kind of like Mikel Rychliski's patch, but we can certainly do better,
ie something like
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -101,8 +101,9 @@ static inline bool __access_ok(..
 		return valid_user_address(ptr);
 	} else {
 		unsigned long sum = size + (__force unsigned long)ptr;
+		unsigned long max = runtime_const_ptr(USER_PTR_MAX)+1;
 
-		return valid_user_address(sum) && sum >= (__force unsigned long)ptr;
+		return sum <= max && sum >= (__force unsigned long)ptr;
 	}
 }
 #define __access_ok __access_ok
would seem like it should work, and now doesn't make the fast cases
off by one.
Yes, it adds an "add 1" to the runtime path (instead of the init-time
constant fixup), which is admittedly annoying. But this really
*should* be the slow path.
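To spell out what the slow path then checks (user-space sketch only,
with a made-up constant standing in for runtime_const_ptr(USER_PTR_MAX)+1):

#include <stdint.h>
#include <stdio.h>

/* Illustrative value only: "one past the last valid user address" */
#define USER_MAX_PLUS_1 0x00007ffffffff000UL

/*
 * Same shape as the patched slow path: reject if ptr+size wraps
 * (sum < ptr) or lands beyond one past the last valid address.
 */
static int range_ok(uint64_t ptr, uint64_t size)
{
	uint64_t sum = ptr + size;

	return sum <= USER_MAX_PLUS_1 && sum >= ptr;
}

int main(void)
{
	/* a range whose last byte is the last valid address is accepted ... */
	printf("%d\n", range_ok(USER_MAX_PLUS_1 - 16, 16));		/* 1 */
	/* ... one byte further is not ... */
	printf("%d\n", range_ok(USER_MAX_PLUS_1 - 16, 17));		/* 0 */
	/* ... and a wrapping size is caught by the "sum >= ptr" test */
	printf("%d\n", range_ok(USER_MAX_PLUS_1 - 16, UINT64_MAX));	/* 0 */
	return 0;
}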
We do have a few annoying non-constant access_ok() calls in core code.
The iter code uses access_ok + raw_copy_to_user, because it's evil and
bad. I'm really not sure why it does that. I think it's *purely* bad
history, ie we used to do access_ok() far away from the iov_iter copy,
and then did __copy_from_user(), and then at some point it got changed
to have the access_ok() closer, rather than just use
"copy_from_user()".
So I get the feeling that those access_ok() cases could be removed
entirely in favor of just using the right user copy function. Getting
rid of the whole silly size checking thing.
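Roughly, that would mean going from the first helper below to the
second (hypothetical function names, not the actual iov_iter code):

#include <linux/uaccess.h>

/* The pattern being complained about: open-coded range check + raw copy */
static unsigned long copy_chunk_open_coded(void __user *to, const void *from,
					   unsigned long len)
{
	if (!access_ok(to, len))
		return len;			/* nothing copied */
	return raw_copy_to_user(to, from, len);
}

/* copy_to_user() already does the access_ok() check internally */
static unsigned long copy_chunk(void __user *to, const void *from,
				unsigned long len)
{
	return copy_to_user(to, from, len);
}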
David, does that patch above work for you?
Linus