Message-ID: <ebe4c0e8fe1b78c014bbc18832ae0eec8fef419d.1741778537.git.kai.huang@intel.com>
Date: Thu, 13 Mar 2025 00:34:14 +1300
From: Kai Huang <kai.huang@...el.com>
To: dave.hansen@...el.com,
	bp@...en8.de,
	tglx@...utronix.de,
	peterz@...radead.org,
	mingo@...hat.com,
	kirill.shutemov@...ux.intel.com
Cc: hpa@...or.com,
	x86@...nel.org,
	linux-kernel@...r.kernel.org,
	pbonzini@...hat.com,
	seanjc@...gle.com,
	rick.p.edgecombe@...el.com,
	reinette.chatre@...el.com,
	isaku.yamahata@...el.com,
	dan.j.williams@...el.com,
	thomas.lendacky@....com,
	ashish.kalra@....com,
	dwmw@...zon.co.uk,
	bhe@...hat.com,
	nik.borisov@...e.com,
	sagis@...gle.com,
	Dave Young <dyoung@...hat.com>,
	David Kaplan <david.kaplan@....com>
Subject: [RFC PATCH 2/5] x86/kexec: Do unconditional WBINVD for bare-metal in relocate_kernel()

On both SME and TDX systems, dirty cachelines for the same physical
memory address can coexist with and without the encryption bit(s) set,
and the CPU may flush them back to memory in arbitrary order.  During
kexec, the caches must be flushed before jumping to the new kernel to
avoid silent memory corruption in the new kernel.

The WBINVD in stop_this_cpu() flushes caches for all remote CPUs when
they are being stopped.  For SME, the WBINVD in relocate_kernel()
flushes the cache for the last running CPU (which is doing kexec).

Similarly, to support kexec on a TDX host, after stopping all remote
CPUs and flushing their caches, the kernel needs to flush the cache of
the last running CPU.

Use the existing WBINVD in relocate_kernel() to cover TDX host as well.

Just do unconditional WBINVD to cover both SME and TDX instead of
sprinkling additional vendor-specific checks.  Kexec is a slow path, and
the additional WBINVD is acceptable for the sake of simplicity and
maintainability.

But only do WBINVD on bare-metal, because TDX guests and SEV-ES/SEV-SNP
guests would get an unexpected (and unnecessary) exception (#VE or #VC)
that the kernel is unable to handle at relocate_kernel() time, since the
kernel has already torn down the IDT.

Remove the host_mem_enc_active local variable and directly pass
!cpu_feature_enabled(X86_FEATURE_HYPERVISOR) as an argument when calling
relocate_kernel().  cpu_feature_enabled() is always inlined rather than
compiled as a function call, so it is safe to use after load_segments()
when call depth tracking is enabled.

Signed-off-by: Kai Huang <kai.huang@...el.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Dave Young <dyoung@...hat.com>
Cc: David Kaplan <david.kaplan@....com>
Reviewed-by: Tom Lendacky <thomas.lendacky@....com>
Tested-by: David Kaplan <david.kaplan@....com>
---
 arch/x86/include/asm/kexec.h         |  2 +-
 arch/x86/kernel/machine_kexec_64.c   | 14 ++++++--------
 arch/x86/kernel/relocate_kernel_64.S | 15 ++++++++++-----
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 8ad187462b68..48c313575262 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -123,7 +123,7 @@ relocate_kernel_fn(unsigned long indirection_page,
 		   unsigned long pa_control_page,
 		   unsigned long start_address,
 		   unsigned int preserve_context,
-		   unsigned int host_mem_enc_active);
+		   unsigned int bare_metal);
 #endif
 extern relocate_kernel_fn relocate_kernel;
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index a68f5a0a9f37..0e9808eeb63e 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -346,16 +346,9 @@ void __nocfi machine_kexec(struct kimage *image)
 {
 	unsigned long reloc_start = (unsigned long)__relocate_kernel_start;
 	relocate_kernel_fn *relocate_kernel_ptr;
-	unsigned int host_mem_enc_active;
 	int save_ftrace_enabled;
 	void *control_page;
 
-	/*
-	 * This must be done before load_segments() since if call depth tracking
-	 * is used then GS must be valid to make any function calls.
-	 */
-	host_mem_enc_active = cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT);
-
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
 		save_processor_state();
@@ -398,6 +391,11 @@ void __nocfi machine_kexec(struct kimage *image)
 	 *
 	 * I take advantage of this here by force loading the
 	 * segments, before I zap the gdt with an invalid value.
+	 *
+	 * load_segments() resets GS to 0.  Don't make any function call
+	 * after here since call depth tracking uses per-CPU variables to
+	 * operate (relocate_kernel() is explicitly ignored by call depth
+	 * tracking).
 	 */
 	load_segments();
 	/*
@@ -412,7 +410,7 @@ void __nocfi machine_kexec(struct kimage *image)
 					   virt_to_phys(control_page),
 					   image->start,
 					   image->preserve_context,
-					   host_mem_enc_active);
+					   !cpu_feature_enabled(X86_FEATURE_HYPERVISOR));
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index b44d8863e57f..dc1a59cd8139 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -50,7 +50,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 	 * %rsi pa_control_page
 	 * %rdx start address
 	 * %rcx preserve_context
-	 * %r8  host_mem_enc_active
+	 * %r8  bare_metal
 	 */
 
 	/* Save the CPU context, used for jumping back */
@@ -107,7 +107,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	/*
 	 * %rdi	indirection page
 	 * %rdx start address
-	 * %r8 host_mem_enc_active
+	 * %r8 bare_metal
 	 * %r9 page table page
 	 * %r11 preserve_context
 	 * %r13 original CR4 when relocate_kernel() was invoked
@@ -156,14 +156,19 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	movq	%r9, %cr3
 
 	/*
-	 * If SME is active, there could be old encrypted cache line
+	 * If SME/TDX is active, there could be old encrypted cache line
 	 * entries that will conflict with the now unencrypted memory
 	 * used by kexec. Flush the caches before copying the kernel.
+	 *
+	 * Do WBINVD for bare-metal only to cover both SME and TDX. Doing
+	 * WBINVD in guest results in an unexpected exception (#VE or #VC)
+	 * for TDX and SEV-ES/SNP guests which then crashes the guest (the
+	 * kernel has torn down the IDT).
 	 */
 	testq	%r8, %r8
-	jz .Lsme_off
+	jz .Lno_wbinvd
 	wbinvd
-.Lsme_off:
+.Lno_wbinvd:
 
 	call	swap_pages
 
-- 
2.48.1

