Date: Tue, 19 Mar 2024 01:48:45 +0000
From: Kai Huang <kai.huang@...el.com>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org,
	dave.hansen@...el.com,
	bp@...en8.de,
	kirill.shutemov@...ux.intel.com,
	tglx@...utronix.de,
	mingo@...hat.com,
	hpa@...or.com,
	luto@...nel.org,
	peterz@...radead.org,
	rick.p.edgecombe@...el.com,
	thomas.lendacky@....com,
	ashish.kalra@....com,
	chao.gao@...el.com,
	bhe@...hat.com,
	nik.borisov@...e.com,
	pbonzini@...hat.com,
	seanjc@...gle.com
Subject: [PATCH v2 2/5] x86/kexec: do unconditional WBINVD in relocate_kernel()

Both SME and TDX can leave caches in an incoherent state due to memory
encryption.  During kexec, the caches must be flushed before jumping to
the second kernel to avoid silently corrupting memory used by the
second kernel.

During kexec, the WBINVD in stop_this_cpu() flushes the caches of all
remote CPUs as they are being stopped.  For SME, the WBINVD in
relocate_kernel() flushes the cache of the last running CPU (the one
executing the kexec).

Similarly for TDX: once all remote CPUs have been stopped with their
caches flushed, the kernel also needs to flush the cache of the last
running CPU in order to support kexec.

Make the WBINVD in relocate_kernel() unconditional to cover both SME
and TDX.

Signed-off-by: Kai Huang <kai.huang@...el.com>
Suggested-by: Borislav Petkov <bp@...en8.de>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Dave Young <dyoung@...hat.com>
---
 arch/x86/include/asm/kexec.h         |  3 +--
 arch/x86/kernel/machine_kexec_64.c   |  3 +--
 arch/x86/kernel/relocate_kernel_64.S | 13 +++----------
 3 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 91ca9a9ee3a2..9754794242ad 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -127,8 +127,7 @@ unsigned long
 relocate_kernel(unsigned long indirection_page,
 		unsigned long page_list,
 		unsigned long start_address,
-		unsigned int preserve_context,
-		unsigned int host_mem_enc_active);
+		unsigned int preserve_context);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index bc0a5348b4a6..b9a632479b36 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -357,8 +357,7 @@ void machine_kexec(struct kimage *image)
 	image->start = relocate_kernel((unsigned long)image->head,
 				       (unsigned long)page_list,
 				       image->start,
-				       image->preserve_context,
-				       cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
+				       image->preserve_context);
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 56cab1bb25f5..66b628686dbc 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -50,7 +50,6 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 	 * %rsi page_list
 	 * %rdx start address
 	 * %rcx preserve_context
-	 * %r8  host_mem_enc_active
 	 */
 
 	/* Save the CPU context, used for jumping back */
@@ -78,9 +77,6 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 	pushq $0
 	popfq
 
-	/* Save SME active flag */
-	movq	%r8, %r12
-
 	/*
 	 * get physical address of control page now
 	 * this is impossible after page table switch
@@ -160,14 +156,11 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	movq	%r9, %cr3
 
 	/*
-	 * If SME is active, there could be old encrypted cache line
-	 * entries that will conflict with the now unencrypted memory
-	 * used by kexec. Flush the caches before copying the kernel.
+	 * The kernel could leave caches in an incoherent state on SME/TDX
+	 * capable platforms.  Do an unconditional cache flush to avoid
+	 * silent memory corruption to the new kernel on these platforms.
 	 */
-	testq	%r12, %r12
-	jz 1f
 	wbinvd
-1:
 
 	movq	%rcx, %r11
 	call	swap_pages
-- 
2.34.1

