Message-ID: <20251127092226.1439196-14-ardb+git@google.com>
Date: Thu, 27 Nov 2025 10:22:33 +0100
From: Ard Biesheuvel <ardb+git@...gle.com>
To: linux-hardening@...r.kernel.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Ard Biesheuvel <ardb@...nel.org>, Kees Cook <kees@...nel.org>, Ryan Roberts <ryan.roberts@....com>,
Will Deacon <will@...nel.org>, Arnd Bergmann <arnd@...db.de>, Jeremy Linton <jeremy.linton@....com>,
Catalin Marinas <Catalin.Marinas@....com>, Mark Rutland <mark.rutland@....com>,
"Jason A. Donenfeld" <Jason@...c4.com>
Subject: [RFC/RFT PATCH 6/6] randomize_kstack: Use get_random_u8() at entry for entropy
From: Ard Biesheuvel <ardb@...nel.org>
The randomize_kstack code has a couple of sharp edges, which stem from
the fact that a counter was considered more suitable than the kernel's
RNG for providing entropy.
Given that kstack randomization only requires 6-8 bits of entropy,
get_random_u8() is more suitable here, and its fast path has been made
lockless, so the only overhead is a preempt_disable()/preempt_enable()
pair and
a local 64-bit cmpxchg() to update the RNG's internal state.
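For illustration only, below is a minimal userspace model of such a
lockless fast path; it is not the kernel's actual random.c code, and
the names (byte_batch, refill_batch, get_random_u8_model) are invented
for the sketch. A batch of bytes is consumed by claiming an index with
a single 64-bit compare-exchange (index in the low word, a generation
count in the high word to avoid ABA across refills), and the refill
slow path only runs once the batch is exhausted.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH_BYTES	64

/*
 * In the kernel the batch would be per-CPU and accessed with preemption
 * disabled, so only interrupts on the same CPU can race with the
 * cmpxchg; this model is exercised single-threaded below.
 */
struct byte_batch {
	uint8_t bytes[BATCH_BYTES];
	/* low 32 bits: next unused index, high 32 bits: generation count */
	_Atomic uint64_t state;
};

static void refill_batch(struct byte_batch *b)
{
	/* Placeholder entropy source for the model only. */
	for (int i = 0; i < BATCH_BYTES; i++)
		b->bytes[i] = (uint8_t)rand();

	/* Bump the generation and reset the index to 0 in one store. */
	uint64_t gen = (atomic_load(&b->state) >> 32) + 1;
	atomic_store(&b->state, gen << 32);
}

static uint8_t get_random_u8_model(struct byte_batch *b)
{
	uint64_t old = atomic_load(&b->state);

	for (;;) {
		uint32_t idx = (uint32_t)old;

		if (idx >= BATCH_BYTES) {
			/* Slow path: refill, then retry the fast path. */
			refill_batch(b);
			old = atomic_load(&b->state);
			continue;
		}
		/* Fast path: claim bytes[idx] with a single 64-bit cmpxchg. */
		if (atomic_compare_exchange_weak(&b->state, &old, old + 1))
			return b->bytes[idx];
		/* Lost a race; 'old' now holds the current state, retry. */
	}
}

int main(void)
{
	struct byte_batch b = { .state = BATCH_BYTES };	/* start exhausted */

	for (int i = 0; i < 4; i++)
		printf("random byte: %u\n", get_random_u8_model(&b));
	return 0;
}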
This removes the need to defer taking the entropy sample until syscall
exit, and with it the per-CPU variable used to carry that value over to
the next syscall on the same CPU. It also works around the issues
identified by Ryan in [0].
So replace the use of the per-CPU variable in add_random_kstack_offset()
with a direct call to get_random_u8() for all users, and turn
choose_random_kstack_offset() into an empty stub so that each arch can
get rid of it individually.
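As a standalone sanity check of the offset arithmetic (again a sketch,
not kernel code, and the 0x3FF mask is copied here as an assumption
about KSTACK_OFFSET_MAX()): shifting an 8-bit random value left by 2
yields a 4-byte-aligned offset in [0, 1020], so all 256 values survive
the mask; any further reduction towards the 6-8 bits quoted above comes
from the stack alignment the compiler applies to the alloca().

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumed to mirror the mask used by include/linux/randomize_kstack.h. */
#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)

int main(void)
{
	uint32_t min = UINT32_MAX, max = 0;

	for (int i = 0; i < (1 << 16); i++) {
		uint8_t rnd = (uint8_t)rand();	/* stand-in for get_random_u8() */
		uint32_t offset = KSTACK_OFFSET_MAX((uint32_t)rnd << 2);

		if (offset < min)
			min = offset;
		if (offset > max)
			max = offset;
	}
	printf("offset range: [%u, %u], always a multiple of 4\n", min, max);
	return 0;
}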
[0] https://lore.kernel.org/all/dd8c37bc-795f-4c7a-9086-69e584d8ab24@arm.com/
Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
arch/Kconfig | 9 +++--
include/linux/randomize_kstack.h | 36 +++-----------------
init/main.c | 1 -
3 files changed, 8 insertions(+), 38 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 61130b88964b..baea154f833e 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1520,12 +1520,11 @@ config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
help
An arch should select this symbol if it can support kernel stack
offset randomization with calls to add_random_kstack_offset()
- during syscall entry and choose_random_kstack_offset() during
- syscall exit. Careful removal of -fstack-protector-strong and
+ during syscall entry. Careful removal of -fstack-protector-strong and
-fstack-protector should also be applied to the entry code and
- closely examined, as the artificial stack bump looks like an array
- to the compiler, so it will attempt to add canary checks regardless
- of the static branch state.
+ closely examined, as the artificial stack bump looks like an array to
+ the compiler, so it will attempt to add canary checks regardless of
+ the static branch state.
config RANDOMIZE_KSTACK_OFFSET
bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
index 1d982dbdd0d0..db1f056e61ec 100644
--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -9,7 +9,6 @@
DECLARE_STATIC_KEY_MAYBE(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
randomize_kstack_offset);
-DECLARE_PER_CPU(u32, kstack_offset);
/*
* Do not use this anywhere else in the kernel. This is used here because
@@ -50,49 +49,22 @@ DECLARE_PER_CPU(u32, kstack_offset);
* add_random_kstack_offset - Increase stack utilization by previously
* chosen random offset
*
- * This should be used in the syscall entry path when interrupts and
- * preempt are disabled, and after user registers have been stored to
- * the stack. For testing the resulting entropy, please see:
+ * This should be used in the syscall entry path after user registers have been
+ * stored to the stack. For testing the resulting entropy, please see:
* tools/testing/selftests/lkdtm/stack-entropy.sh
*/
#define add_random_kstack_offset() do { \
if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
&randomize_kstack_offset)) { \
- u32 offset = raw_cpu_read(kstack_offset); \
+ u32 offset = get_random_u8() << 2; \
u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset)); \
/* Keep allocation even after "ptr" loses scope. */ \
asm volatile("" :: "r"(ptr) : "memory"); \
} \
} while (0)
-
-/**
- * choose_random_kstack_offset - Choose the random offset for the next
- * add_random_kstack_offset()
- *
- * This should only be used during syscall exit when interrupts and
- * preempt are disabled. This position in the syscall flow is done to
- * frustrate attacks from userspace attempting to learn the next offset:
- * - Maximize the timing uncertainty visible from userspace: if the
- * offset is chosen at syscall entry, userspace has much more control
- * over the timing between choosing offsets. "How long will we be in
- * kernel mode?" tends to be more difficult to predict than "how long
- * will we be in user mode?"
- * - Reduce the lifetime of the new offset sitting in memory during
- * kernel mode execution. Exposure of "thread-local" memory content
- * (e.g. current, percpu, etc) tends to be easier than arbitrary
- * location memory exposure.
- */
-#define choose_random_kstack_offset(rand) do { \
- if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
- &randomize_kstack_offset)) { \
- u32 offset = raw_cpu_read(kstack_offset); \
- offset = ror32(offset, 5) ^ (rand); \
- raw_cpu_write(kstack_offset, offset); \
- } \
-} while (0)
#else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
#define add_random_kstack_offset() do { } while (0)
-#define choose_random_kstack_offset(rand) do { } while (0)
#endif /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
+#define choose_random_kstack_offset(rand) do { } while (0)
#endif
diff --git a/init/main.c b/init/main.c
index 07a3116811c5..048a62538242 100644
--- a/init/main.c
+++ b/init/main.c
@@ -830,7 +830,6 @@ static inline void initcall_debug_enable(void)
#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
randomize_kstack_offset);
-DEFINE_PER_CPU(u32, kstack_offset);
static int __init early_randomize_kstack_offset(char *buf)
{
--
2.52.0.107.ga0afd4fd5b-goog