Message-Id: <295dceed7bef2391f86e751b9bfc9a34347062e4.1673470326.git.david.keisarschm@mail.huji.ac.il>
Date: Fri, 13 Jan 2023 23:39:45 +0200
From: david.keisarschm@...l.huji.ac.il
To: linux-kernel@...r.kernel.org,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Cc: Jason@...c4.com, keescook@...omium.org,
David Keisar Schmidt <david.keisarschm@...l.huji.ac.il>,
aksecurity@...il.com, ilay.bahat1@...il.com
Subject: [PATCH v4 3/3] x86/mm/kaslr: Add 64-bit version of prandom_seed_state

From: David Keisar Schmidt <david.keisarschm@...l.huji.ac.il>

The randomization of the virtual addresses of the kernel memory regions
(physical memory mapping, vmalloc and vmemmap) in arch/x86/mm/kaslr.c
is based on prandom_bytes_state(), which uses the prandom_u32 PRNG.
However, the generator is seeded via prandom_seed_state(), which
effectively uses only 32 bits of the seed. As a result, observing ONE
region's offset (about 30 bits) leaves the attacker with only a couple
of candidate seeds, from which the offsets of the remaining two regions
can be computed.
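
For reference, prandom_seed_state() (include/linux/prandom.h) derives
all four words of the taus113 state from a single 32-bit fold of the
seed; it looks roughly like this:

  static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
  {
  	u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;

  	state->s1 = __seed(i,   2U);
  	state->s2 = __seed(i,   8U);
  	state->s3 = __seed(i,  16U);
  	state->s4 = __seed(i, 128U);
  }

Since s1..s4 are all derived from the same 32-bit value i, the state,
and with it every generated offset, carries at most 32 bits of entropy.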
Hence, we implemented an adjusted version of prandom_seed_state()
inside kaslr.c so that it takes advantage of all 64 bits of the seed.
With this implementation, enumerating the seed space is infeasible, and
attacking the linearity of the generator requires ~113 bits of output,
which two exposed region offsets (up to ~30 bits each) do not provide.

Signed-off-by: David Keisar Schmidt <david.keisarschm@...l.huji.ac.il>
---
Changes since v3:
* We took a different approach and replaced the invocation of
  prandom_seed_state() with a revised version that is more secure.

Changes since v2:
* Edited the commit message.
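
Not part of the commit, just a review aid: the self-contained userspace
sketch below re-implements the taus113 step from lib/random32.c along
with both seeding variants (the file name and the helpers old_seed() /
new_seed() are made up for the demo, and the warm-up is written as a
loop), and constructs two distinct 64-bit seeds whose 32-bit folds
collide. With the current prandom_seed_state() the two output streams
coincide; with the revised seeding they diverge.

  /*
   * seedcheck.c - informal demo, NOT part of the patch.
   * Build: gcc -O2 -o seedcheck seedcheck.c
   */
  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  struct rnd_state { uint32_t s1, s2, s3, s4; };

  /* Same role as __seed() in include/linux/prandom.h */
  static uint32_t __seed(uint32_t x, uint32_t m)
  {
  	return (x < m) ? x + m : x;
  }

  /* LCG step, as next_pseudo_random32() in include/linux/prandom.h */
  static uint32_t next_pseudo_random32(uint32_t seed)
  {
  	return seed * 1664525 + 1013904223;
  }

  /* taus113 step, constants as in lib/random32.c */
  static uint32_t prandom_u32_state(struct rnd_state *st)
  {
  #define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))
  	st->s1 = TAUSWORTHE(st->s1,  6U, 13U, 4294967294U, 18U);
  	st->s2 = TAUSWORTHE(st->s2,  2U, 27U, 4294967288U,  2U);
  	st->s3 = TAUSWORTHE(st->s3, 13U, 21U, 4294967280U,  7U);
  	st->s4 = TAUSWORTHE(st->s4,  3U, 12U, 4294967168U, 13U);
  	return st->s1 ^ st->s2 ^ st->s3 ^ st->s4;
  }

  /* Current seeding: all four words come from one 32-bit fold */
  static void old_seed(struct rnd_state *st, uint64_t seed)
  {
  	uint32_t i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;

  	st->s1 = __seed(i,   2U);
  	st->s2 = __seed(i,   8U);
  	st->s3 = __seed(i,  16U);
  	st->s4 = __seed(i, 128U);
  }

  /* Revised seeding from this patch */
  static void new_seed(struct rnd_state *st, uint64_t seed)
  {
  	uint32_t i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
  	uint32_t j = ((seed >> 32) ^ (seed << 10)) & 0xffffffffUL;
  	int k;

  	st->s1 = __seed(i, 2U);
  	st->s2 = __seed(j, 8U);
  	st->s3 = __seed(next_pseudo_random32(i + j), 16U);
  	st->s4 = __seed(next_pseudo_random32(j - ((i >> 16) ^ (i << 16))), 128U);
  	for (k = 0; k < 10; k++)
  		prandom_u32_state(st);
  }

  int main(void)
  {
  	/*
  	 * fold(seed) = hi ^ ((lo << 10) & 0xffffffff) ^ lo is linear in
  	 * hi, so pick lo1 != lo2 and solve for hi2 to force a collision.
  	 */
  	uint32_t lo1 = 0x12345678, lo2 = 0x9abcdef0, hi1 = 0xdeadbeef;
  	uint32_t hi2 = hi1 ^ ((lo1 << 10) ^ lo1) ^ ((lo2 << 10) ^ lo2);
  	uint64_t sa = ((uint64_t)hi1 << 32) | lo1;
  	uint64_t sb = ((uint64_t)hi2 << 32) | lo2;
  	struct rnd_state a, b;
  	int k;

  	old_seed(&a, sa);
  	old_seed(&b, sb);
  	printf("old:");
  	for (k = 0; k < 3; k++)
  		printf(" %08"PRIx32"/%08"PRIx32,
  		       prandom_u32_state(&a), prandom_u32_state(&b));
  	printf("\n");

  	new_seed(&a, sa);
  	new_seed(&b, sb);
  	printf("new:");
  	for (k = 0; k < 3; k++)
  		printf(" %08"PRIx32"/%08"PRIx32,
  		       prandom_u32_state(&a), prandom_u32_state(&b));
  	printf("\n");
  	return 0;
  }

The first line should print three identical pairs (the old seeding
collapses both seeds onto the same state); the second line should not.
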
arch/x86/mm/kaslr.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 557f0fe25..5fd73d5ad 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -59,6 +59,29 @@ static inline unsigned long get_padding(struct kaslr_memory_region *region)
{
return (region->size_tb << TB_SHIFT);
}
+static inline void _kaslr_prandom_seed_state(struct rnd_state *state, u64 seed)
+{
+	/* Fold the seed twice, differently, so all 64 seed bits are used */
+	u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
+	u32 j = ((seed >> 32) ^ (seed << 10)) & 0xffffffffUL;
+	state->s1 = __seed(i, 2U);
+	state->s2 = __seed(j, 8U);
+	/* Ensure no obvious linear relation with the previous states */
+	state->s3 = __seed(next_pseudo_random32(i + j), 16U);
+	state->s4 = __seed(next_pseudo_random32(j - ((i >> 16) ^ (i << 16))), 128U);
+
+	/* Call the RNG ten times to satisfy the recurrence condition */
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+	prandom_u32_state(state);
+}

/* Initialize base and padding for each memory region randomized with KASLR */
void __init kernel_randomize_memory(void)
@@ -113,7 +136,8 @@ void __init kernel_randomize_memory(void)
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
remain_entropy -= get_padding(&kaslr_regions[i]);

-	prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
+	_kaslr_prandom_seed_state(&rand_state,
+				  kaslr_get_random_long("Memory"));

for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
unsigned long entropy;
--
2.38.0