Message-ID: <20250611223710.254780d8@pumpkin>
Date: Wed, 11 Jun 2025 22:37:10 +0100
From: David Laight <david.laight.linux@...il.com>
To: Christophe Leroy <christophe.leroy@...roup.eu>
Cc: Jaroslav Kysela <perex@...ex.cz>, Takashi Iwai <tiwai@...e.com>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-sound@...r.kernel.org, Herve Codina <herve.codina@...tlin.com>, Mark
Brown <broonie@...nel.org>
Subject: Re: [PATCH] ALSA: pcm: Convert multiple {get/put}_user to
user_access_begin/user_access_end()
On Wed, 11 Jun 2025 15:48:30 +0200
Christophe Leroy <christophe.leroy@...roup.eu> wrote:
> Le 10/06/2025 à 21:53, David Laight a écrit :
> > On Sat, 7 Jun 2025 13:37:42 +0200
> > Christophe Leroy <christophe.leroy@...roup.eu> wrote:
> >
> >> With user access protection (Called SMAP on x86 or KUAP on powerpc)
> >> each and every call to get_user() or put_user() performs heavy
> >> operations to unlock and lock kernel access to userspace.
> >>
> >> To avoid that, perform user accesses by blocks using
> >> user_access_begin/user_access_end() and unsafe_get_user()/
> >> unsafe_put_user() and alike.
> >
> > Did you consider using masked_user_access_begin() ?
> > It removes a conditional branch and lfence as well.
>
> Thanks, I was not aware of that new function, although I remember some
> discussion about masked user access.
>
> Looks like this is specific to x86 for the time being.
I think it is two architectures now.
But it mostly requires a guard page between user and kernel addresses
and 'cmov' if you want to avoid speculation 'issues' (and 'round tuits').
> I would have
> expected that to be transparent to the consumer. Although looking at
> strncpy_from_user() I understand the benefit of keeping it separate.
>
> However, is it worth the effort and the ugliness of having to do (copied
> from fs/select.c):
>
> 	if (can_do_masked_user_access())
> 		from = masked_user_access_begin(from);
> 	else if (!user_read_access_begin(from, sizeof(*from)))
> 		return -EFAULT;
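For readers following along, the opening above sits inside a larger pattern
in fs/select.c (get_sigset_argpack()), which reads roughly like the sketch
below. This is kernel-only code (it needs <linux/uaccess.h> and will not
build standalone), and the struct name and fields here are illustrative,
not the real ones:

```c
/* Sketch of the masked-access read pattern, modeled on
 * get_sigset_argpack() in fs/select.c. Illustrative names only. */
struct argpack {
	void __user *p;
	size_t size;
};

static int read_argpack(struct argpack *to, struct argpack __user *from)
{
	if (can_do_masked_user_access())
		from = masked_user_access_begin(from);	/* cannot fail here */
	else if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	/* Faulting accesses jump to the Efault label. */
	unsafe_get_user(to->p, &from->p, Efault);
	unsafe_get_user(to->size, &from->size, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;
}
```

Note the asymmetry David's proposal below targets: the masked branch has no
failure check at the begin, while the plain branch does.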
I proposed (uaccess: Simplify code pattern for masked user copies):
+#ifdef masked_user_access_begin
+#define masked_user_read_access_begin(from, size) \
+	((*(from) = masked_user_access_begin(*(from))), 1)
+#define masked_user_write_access_begin(from, size) \
+	((*(from) = masked_user_access_begin(*(from))), 1)
+#else
+#define masked_user_read_access_begin(from, size) \
+	user_read_access_begin(*(from), size)
+#define masked_user_write_access_begin(from, size) \
+	user_write_access_begin(*(from), size)
+#endif
Which allows the simple change:
-	if (!user_read_access_begin(from, sizeof(*from)))
+	if (!masked_user_read_access_begin(&from, sizeof(*from)))
 		return -EFAULT;
 	unsafe_get_user(xxx, &from->xxx, Efault);
But Linus said:
> I really dislike the use of "pass pointer to simple variable you are
> going to change" interfaces which is why I didn't do it this way.
But, in this case, you absolutely need the 'user pointer' updated,
so the interface needs to make it hard to code otherwise.
Note that it is best if masked_user_access_begin() returns the base
address of the guard page for kernel addresses (which amd64 now does)
rather than ~0.
Otherwise it is pretty imperative that the first access be to offset 0.
David
>
> In addition, I would expect a masked_user_read_access_begin() and a
> masked_user_write_access_begin(). It looks odd (and would be wrong on
> powerpc) to not be able to differentiate between read and write in the
> begin yet using user_read_access_end() at the end, ref get_sigset_argpack()
>
> Christophe