Message-ID: <20241030101803.2037606-18-ardb+git@google.com>
Date: Wed, 30 Oct 2024 11:18:12 +0100
From: Ard Biesheuvel <ardb+git@...gle.com>
To: linux-arm-kernel@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org, Ard Biesheuvel <ardb@...nel.org>, 
	Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, Ryan Roberts <ryan.roberts@....com>, 
	Anshuman Khandual <anshuman.khandual@....com>, Kees Cook <keescook@...omium.org>
Subject: [RFC PATCH 8/8] arm64/mm: Account for reduced VA sizes in T0SZ and
 skip the levels

From: Ard Biesheuvel <ardb@...nel.org>

Now that a smaller value for TASK_SIZE is used when running with a
reduced virtual address space for userland, it is guaranteed that only
the first entry of each root-level page table is populated. This means
that the number of translation levels performed by the MMU can be
reduced, by programming that first entry into TTBR0_EL1 directly and
updating T0SZ accordingly.
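
To make the arithmetic concrete (a standalone userspace sketch, not
kernel code; the 48/39-bit split below is just an example): with 4k
pages, each translation level resolves PAGE_SHIFT - 3 = 9 bits of VA
(512 eight-byte entries per table page), so every whole multiple of 9
bits trimmed from the userland VA size makes one root level redundant.

  /* Hypothetical illustration of the level-skip count; example values. */
  #include <stdio.h>

  int main(void)
  {
          int page_shift = 12;                  /* 4k pages */
          int bits_per_level = page_shift - 3;  /* 9 VA bits per level */
          int vabits_actual = 48;               /* hardware VA size */
          int task_size_bits = 39;              /* reduced userland VA */

          /* Same quotient that 'advance' computes in cpu_switch_mm() below. */
          int skip = (vabits_actual - task_size_bits) / bits_per_level;

          printf("root levels skipped: %d\n", skip);   /* prints 1 */
          return 0;
  }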

This is a quick-and-dirty hack, but it should reap all the benefits in
terms of MMU performance and reduced TLB pressure, at the cost of one
wasted page per process (or two on 52-bit VA capable hardware).
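
For reference, TCR_EL1.T0SZ encodes the TTBR0 region size as
2^(64 - T0SZ), so with the same example split as above the second hunk
below works out to:

  T0SZ = 64 - MIN(vabits_actual, CONFIG_TASK_SIZE_BITS)
       = 64 - MIN(48, 39)
       = 25

i.e. a 39-bit walk needing one fewer translation level than the 48-bit
one.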

Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
 arch/arm64/include/asm/mmu_context.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 48b3d9553b67..99777da39228 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -57,7 +57,13 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
 
 static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
 {
+	int advance = (vabits_actual - CONFIG_TASK_SIZE_BITS) / (PAGE_SHIFT - 3);
+
 	BUG_ON(pgd == swapper_pg_dir);
+
+	while (advance-- > 0)
+		pgd = __va(__pgd_to_phys(*pgd));
+
 	cpu_do_switch_mm(virt_to_phys(pgd), mm);
 }
 
@@ -82,7 +88,8 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(MIN(vabits_actual, \
+									CONFIG_TASK_SIZE_BITS)))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
-- 
2.47.0.163.g1226f6d8fa-goog

