Message-ID: <20250817144943.76b9ee62@pumpkin>
Date: Sun, 17 Aug 2025 14:49:43 +0100
From: David Laight <david.laight.linux@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, Linus Torvalds
<torvalds@...ux-foundation.org>, Mathieu Desnoyers
<mathieu.desnoyers@...icios.com>, Peter Zijlstra <peterz@...radead.org>,
Darren Hart <dvhart@...radead.org>, Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>, x86@...nel.org,
Alexander Viro <viro@...iv.linux.org.uk>, Christian Brauner
<brauner@...nel.org>, Jan Kara <jack@...e.cz>,
linux-fsdevel@...r.kernel.org
Subject: Re: [patch 0/4] uaccess: Provide and use helpers for user masked
access
On Wed, 13 Aug 2025 17:57:00 +0200 (CEST)
Thomas Gleixner <tglx@...utronix.de> wrote:
> commit 2865baf54077 ("x86: support user address masking instead of
> non-speculative conditional") provided an optimization for
> unsafe_get/put_user(), which optimizes the Spectre-V1 mitigation in an
> architecture specific way. Currently only x86_64 supports that.
>
> The required code pattern screams for helper functions before it is copied
> all over the kernel. So far the exposure is limited to futex, x86 and
> fs/select.
>
> Provide a set of helpers for common single size access patterns:
(gmail hasn't decided to accept 1/4 yet - I need to find a better
mail relay...)
+/*
+ * Conveniance macros to avoid spreading this pattern all over the place
^ spelling...
+ */
+#define user_read_masked_begin(src) ({ \
+ bool __ret = true; \
+ \
+ if (can_do_masked_user_access()) \
+ src = masked_user_access_begin(src); \
+ else if (!user_read_access_begin(src, sizeof(*src))) \
+ __ret = false; \
+ __ret; \
+})
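For reference, the usage pattern I'd expect (my condensed reading of
1/4 from the archive, not the actual patch):

	if (!user_read_masked_begin(from))
		return -EFAULT;
	unsafe_get_user(val, &from->field, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;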
I proposed something very similar a while back.
Since it updates 'src' it really ought to be passed by address.
For the general case you also need a parameter for the size.
Linus didn't like it, but I've forgotten why.
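Roughly what I had in mind, by address and with an explicit size
(hypothetical name, untested sketch):

	#define user_access_begin_size(srcp, size) ({			\
		bool __ret = true;					\
									\
		if (can_do_masked_user_access())			\
			*(srcp) = masked_user_access_begin(*(srcp));	\
		else if (!user_read_access_begin(*(srcp), (size)))	\
			__ret = false;					\
		__ret;							\
	})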
I'm also not convinced of the name.
There isn't any 'masking' involved, so it shouldn't be propagated.
There is also an implementation issue.
The original masked_user_access_begin() returned ~0 for kernel addresses.
That requires that the code always access offset zero first.
I looked up some candidates for this code and found one (possibly epoll)
that did the accesses in the wrong order.
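To illustrate the ordering issue (made-up struct, hypothetical caller):
with the ~0 scheme the first form faults as intended, the second wraps
past the top of the address space:

	struct foo { u32 a; u32 b; };

	/* OK: offset 0 is accessed first, so ~0 hits the guard */
	unsafe_get_user(a, &uf->a, Efault);
	unsafe_get_user(b, &uf->b, Efault);

	/* Broken: ~0 + 4 wraps to address 3, and larger offsets
	 * can land on valid user mappings */
	unsafe_get_user(b, &uf->b, Efault);
	unsafe_get_user(a, &uf->a, Efault);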
The current x86-64 'cmp+cmov' version returns the base of the guard page,
so it is safe provided the accesses are 'reasonably sequential'.
That probably ought to be a requirement.
David