Message-ID: <20250602193918.868962-4-cleger@rivosinc.com>
Date: Mon, 2 Jun 2025 21:39:16 +0200
From: Clément Léger <cleger@...osinc.com>
To: linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org
Cc: Clément Léger <cleger@...osinc.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Alexandre Ghiti <alex@...ti.fr>,
"Maciej W . Rozycki" <macro@...am.me.uk>,
David Laight <david.laight.linux@...il.com>
Subject: [PATCH v2 3/3] riscv: uaccess: do not do misaligned accesses in get/put_user()
Doing a misaligned access to userspace memory traps on platforms where
such accesses are emulated. Recent fixes removed the kernel's ability to
emulate unaligned accesses to userspace memory safely, since interrupts
are now kept disabled for the whole duration of the access. Doing such an
access would therefore crash the kernel.

Such behavior was detected with GET_UNALIGN_CTL(), which was doing a
put_user() with an unsigned long* address that should have been an
unsigned int*. Re-enabling kernel misaligned access emulation is a bit
risky and would also degrade performance. Rather than doing that, avoid
misaligned accesses entirely by using copy_from/to_user(), which does not
perform any misaligned access. This is only done for
!CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS and thus only generates a bit
more code for that config.
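
As a rough userspace analogy of the idea (not the kernel code; put_u32 is
a hypothetical helper and memcpy stands in for copy_to_user(), which makes
no alignment assumptions): if the destination is not naturally aligned,
copy the value byte-wise instead of issuing a single word-sized store that
could trap.

	/* Minimal sketch, userspace only, illustrating the fallback. */
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define IS_ALIGNED(p, a) (((uintptr_t)(p) % (a)) == 0)

	static void put_u32(void *dst, uint32_t val)
	{
		if (!IS_ALIGNED(dst, sizeof(val))) {
			/* misaligned: byte-wise copy, no misaligned store */
			memcpy(dst, &val, sizeof(val));
			return;
		}
		/* aligned: direct word-sized store */
		*(uint32_t *)dst = val;
	}

	int main(void)
	{
		unsigned char buf[8] = { 0 };

		put_u32(buf + 1, 0x12345678);	/* misaligned destination */
		printf("%02x %02x %02x %02x\n",
		       buf[1], buf[2], buf[3], buf[4]);
		return 0;
	}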
Signed-off-by: Clément Léger <cleger@...osinc.com>
---
arch/riscv/include/asm/uaccess.h | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
index 046de7ced09c..d472da4450e6 100644
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -169,8 +169,19 @@ do { \
#endif /* CONFIG_64BIT */
+unsigned long __must_check __asm_copy_to_user_sum_enabled(void __user *to,
+ const void *from, unsigned long n);
+unsigned long __must_check __asm_copy_from_user_sum_enabled(void *to,
+ const void __user *from, unsigned long n);
+
#define __get_user_nocheck(x, __gu_ptr, label) \
do { \
+ if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
+ !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \
+ if (__asm_copy_from_user_sum_enabled(&(x), __gu_ptr, sizeof(*__gu_ptr))) \
+ goto label; \
+ break; \
+ } \
switch (sizeof(*__gu_ptr)) { \
case 1: \
__get_user_asm("lb", (x), __gu_ptr, label); \
@@ -297,6 +308,13 @@ do { \
#define __put_user_nocheck(x, __gu_ptr, label) \
do { \
+ if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
+ !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \
+ __inttype(x) val = (__inttype(x))x; \
+ if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(val), sizeof(*__gu_ptr))) \
+ goto label; \
+ break; \
+ } \
switch (sizeof(*__gu_ptr)) { \
case 1: \
__put_user_asm("sb", (x), __gu_ptr, label); \
@@ -450,11 +468,6 @@ static inline void user_access_restore(unsigned long enabled) { }
(x) = (__force __typeof__(*(ptr)))__gu_val; \
} while (0)
-unsigned long __must_check __asm_copy_to_user_sum_enabled(void __user *to,
- const void *from, unsigned long n);
-unsigned long __must_check __asm_copy_from_user_sum_enabled(void *to,
- const void __user *from, unsigned long n);
-
#define unsafe_copy_to_user(_dst, _src, _len, label) \
if (__asm_copy_to_user_sum_enabled(_dst, _src, _len)) \
goto label;
--
2.49.0