Message-Id: <20241119163035.758732080@infradead.org>
Date: Tue, 19 Nov 2024 17:25:32 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: x86@...nel.org, riel@surriel.com
Cc: linux-kernel@...r.kernel.org,
 peterz@...radead.org,
 Andy Lutomirski <luto@...nel.org>
Subject: [PATCH 5/7] x86/mm: Allow temporary mms when IRQs are on

From: Andy Lutomirski <luto@...nel.org>

EFI runtime services should use temporary mms, but they want IRQs enabled
while they run.  Preemption must still be disabled in a temporary mm context.

At some point, the entire temporary mm mechanism should be moved out of
arch code.
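
As an illustration (a minimal caller sketch, not part of this patch, assuming a
dedicated mm such as the EFI mm is at hand), with this change the caller only
needs preemption disabled rather than IRQs:

	temp_mm_state_t prev;

	/*
	 * Scheduling is still illegal while the temporary mm is loaded,
	 * so keep preemption off for the whole critical section; IRQs
	 * may now remain enabled.
	 */
	preempt_disable();
	prev = use_temporary_mm(&efi_mm);

	/* ... perform the EFI runtime call (or similar) here ... */

	unuse_temporary_mm(prev);
	preempt_enable();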

Signed-off-by: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/a8a92ce490b57447ef56898c55133473e481896e.1641659630.git.luto@kernel.org
---
 arch/x86/mm/tlb.c |   19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -679,18 +679,23 @@ void enter_lazy_tlb(struct mm_struct *mm
  * that override the kernel memory protections (e.g., W^X), without exposing the
  * temporary page-table mappings that are required for these write operations to
  * other CPUs. Using a temporary mm also allows to avoid TLB shootdowns when the
- * mapping is torn down.
+ * mapping is torn down.  Temporary mms can also be used for EFI runtime service
+ * calls or similar functionality.
  *
- * Context: The temporary mm needs to be used exclusively by a single core. To
- *          harden security IRQs must be disabled while the temporary mm is
- *          loaded, thereby preventing interrupt handler bugs from overriding
- *          the kernel memory protection.
+ * It is illegal to schedule while using a temporary mm -- the context switch
+ * code is unaware of the temporary mm and does not know how to context switch.
+ * Use a real (non-temporary) mm in a kernel thread if you need to sleep.
+ *
+ * Note: For sensitive memory writes, the temporary mm needs to be used
+ *       exclusively by a single core, and IRQs should be disabled while the
+ *       temporary mm is loaded, thereby preventing interrupt handler bugs from
+ *       overriding the kernel memory protection.
  */
 temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 {
 	temp_mm_state_t temp_state;
 
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_preemption_disabled();
 
 	/*
 	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
@@ -722,7 +727,7 @@ temp_mm_state_t use_temporary_mm(struct
 
 void unuse_temporary_mm(temp_mm_state_t prev_state)
 {
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_preemption_disabled();
 
 	/* Clear the cpumask, to indicate no TLB flushing is needed anywhere */
 	cpumask_clear_cpu(smp_processor_id(),


