Message-Id: <20220131105230.151611980@linuxfoundation.org>
Date: Mon, 31 Jan 2022 11:54:30 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Arnd Bergmann <arnd@...db.de>,
Ard Biesheuvel <ardb@...nel.org>,
"Russell King (Oracle)" <rmk+kernel@...linux.org.uk>
Subject: [PATCH 5.15 005/171] ARM: 9179/1: uaccess: avoid alignment faults in copy_[from|to]_kernel_nofault

From: Ard Biesheuvel <ardb@...nel.org>

commit 15420269b02a63ed8c1841905d8b8b2403246004 upstream.

The helpers that are used to implement copy_from_kernel_nofault() and
copy_to_kernel_nofault() cast a void* to a pointer to a wider type,
which may result in alignment faults on ARM if the compiler decides to
use double-word or multiple-word load/store instructions.
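
For illustration only (these helper names are made up, not part of the
patch), the failure mode looks roughly like this:

	/* The plain dereference below may be compiled into LDRD/LDM (or
	 * STRD/STM), which fault on misaligned addresses even on CPUs
	 * that handle misaligned single-word LDR/STR transparently.
	 * get_unaligned()/put_unaligned() tell the compiler the pointer
	 * may be misaligned, so only alignment-safe sequences are emitted.
	 */
	#include <linux/types.h>
	#include <asm/unaligned.h>

	static void copy_u64(void *dst, const void *src)
	{
		*(u64 *)dst = *(const u64 *)src; /* may fault if misaligned */
	}

	static void copy_u64_any_align(void *dst, const void *src)
	{
		put_unaligned(get_unaligned((const u64 *)src), (u64 *)dst);
	}
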
Only configurations that define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
are affected, given that commit 2423de2e6f4d ("ARM: 9115/1: mm/maccess:
fix unaligned copy_{from,to}_kernel_nofault") ensures that dst and src
are sufficiently aligned otherwise.

So use the unaligned accessors for accessing dst and src in cases where
they may be misaligned.
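
As a usage sketch (assumed, not taken from this patch; kaddr is a
hypothetical, possibly unaligned kernel address), callers of the
nofault helpers must always check for failure:

	/* Returns 0 on success and a negative error code on failure,
	 * instead of oopsing, even if kaddr is unmapped or misaligned.
	 */
	u64 val;

	if (copy_from_kernel_nofault(&val, kaddr, sizeof(val)))
		return -EFAULT;
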
Cc: <stable@...r.kernel.org> # depends on 2423de2e6f4d
Fixes: 2df4c9a741a0 ("ARM: 9112/1: uaccess: add __{get,put}_kernel_nofault")
Reviewed-by: Arnd Bergmann <arnd@...db.de>
Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@...linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 arch/arm/include/asm/uaccess.h |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -11,6 +11,7 @@
 #include <linux/string.h>
 #include <asm/memory.h>
 #include <asm/domain.h>
+#include <asm/unaligned.h>
 #include <asm/unified.h>
 #include <asm/compiler.h>
 
@@ -497,7 +498,10 @@ do { \
 	} \
 	default: __err = __get_user_bad(); break; \
 	} \
-	*(type *)(dst) = __val; \
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) \
+		put_unaligned(__val, (type *)(dst)); \
+	else \
+		*(type *)(dst) = __val; /* aligned by caller */ \
 	if (__err) \
 		goto err_label; \
 } while (0)
@@ -507,7 +511,9 @@ do { \
 	const type *__pk_ptr = (dst); \
 	unsigned long __dst = (unsigned long)__pk_ptr; \
 	int __err = 0; \
-	type __val = *(type *)src; \
+	type __val = IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) \
+		     ? get_unaligned((type *)(src)) \
+		     : *(type *)(src); /* aligned by caller */ \
 	switch (sizeof(type)) { \
 	case 1: __put_user_asm_byte(__val, __dst, __err, ""); break; \
 	case 2: __put_user_asm_half(__val, __dst, __err, ""); break; \
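
A side note on the construct used above (a general kernel idiom, not
specific to this patch): IS_ENABLED() expands to a compile-time 0 or 1,
so the untaken branch is discarded entirely while both arms still get
type-checked, unlike an #ifdef:

	/* IS_ENABLED(CONFIG_FOO) is 1 if CONFIG_FOO=y or =m, else 0;
	 * the dead branch is removed by constant folding.
	 */
	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
		put_unaligned(__val, (type *)(dst));
	else
		*(type *)(dst) = __val;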