Date: Mon, 13 Jun 2022 16:45:32 +0200
From: Ard Biesheuvel <ardb@...nel.org>
To: linux-arm-kernel@...ts.infradead.org
Cc: linux-hardening@...r.kernel.org, Ard Biesheuvel <ardb@...nel.org>,
	Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
	Mark Rutland <mark.rutland@....com>, Kees Cook <keescook@...omium.org>,
	Catalin Marinas <catalin.marinas@....com>, Mark Brown <broonie@...nel.org>,
	Anshuman Khandual <anshuman.khandual@....com>
Subject: [PATCH v4 08/26] arm64: kernel: drop unnecessary PoC cache clean+invalidate

Some early boot code runs before the virtual placement of the kernel is
finalized. We used to go back to the very start and recreate the ID map
along with the page tables describing the virtual kernel mapping, which
involved setting some global variables with the caches off.

In order to ensure that global state created by the KASLR code is not
corrupted by the cache invalidation that occurs in that case, we needed
to clean those global variables to the PoC explicitly.

This is no longer needed now that the ID map is created only once (and
the associated global variable updates are no longer repeated). So drop
the cache maintenance that is no longer necessary.

Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
 arch/arm64/kernel/kaslr.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 418b2bba1521..d5542666182f 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -13,7 +13,6 @@
 #include <linux/pgtable.h>
 #include <linux/random.h>
 
-#include <asm/cacheflush.h>
 #include <asm/fixmap.h>
 #include <asm/kernel-pgtable.h>
 #include <asm/memory.h>
@@ -72,9 +71,6 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
	 */
	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
-			       (unsigned long)&module_alloc_base +
-			       sizeof(module_alloc_base));
 
	/*
	 * Try to map the FDT early. If this fails, we simply bail,
@@ -174,13 +170,6 @@ u64 __init kaslr_early_init(void)
	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
	module_alloc_base &= PAGE_MASK;
 
-	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
-			       (unsigned long)&module_alloc_base +
-			       sizeof(module_alloc_base));
-	dcache_clean_inval_poc((unsigned long)&memstart_offset_seed,
-			       (unsigned long)&memstart_offset_seed +
-			       sizeof(memstart_offset_seed));
-
	return offset;
 }
-- 
2.30.2
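
For context, the calls removed above follow the usual arm64 pattern for
handing a variable written with the caches on over to code that runs with
the MMU and D-cache off: clean (and invalidate) the variable's cache lines
to the Point of Coherency so that a subsequent non-cacheable access observes
the update rather than stale memory. A minimal sketch of that pattern, not
taken from the patch (the global my_early_flag and the helper
publish_early_flag() are hypothetical names used only for illustration):

	#include <linux/init.h>
	#include <linux/types.h>
	#include <asm/cacheflush.h>	/* dcache_clean_inval_poc() */

	static u64 my_early_flag;	/* hypothetical early-boot global */

	static void __init publish_early_flag(u64 val)
	{
		my_early_flag = val;

		/*
		 * Clean+invalidate the variable's cache lines to the PoC so
		 * that early code running with the MMU/D-cache disabled reads
		 * the new value instead of stale memory contents.
		 */
		dcache_clean_inval_poc((unsigned long)&my_early_flag,
				       (unsigned long)&my_early_flag +
				       sizeof(my_early_flag));
	}

With the ID map now created only once, the KASLR globals are no longer
exposed to cache invalidation after they are written, which is why the
patch can drop this maintenance entirely.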