Message-Id: <d1244b4548b99c07f3ef0623f548963373c451d0.1676286526.git.kai.huang@intel.com>
Date: Tue, 14 Feb 2023 00:59:24 +1300
From: Kai Huang <kai.huang@...el.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: linux-mm@...ck.org, dave.hansen@...el.com, peterz@...radead.org,
tglx@...utronix.de, seanjc@...gle.com, pbonzini@...hat.com,
dan.j.williams@...el.com, rafael.j.wysocki@...el.com,
kirill.shutemov@...ux.intel.com, ying.huang@...el.com,
reinette.chatre@...el.com, len.brown@...el.com,
tony.luck@...el.com, ak@...ux.intel.com, isaku.yamahata@...el.com,
chao.gao@...el.com, sathyanarayanan.kuppuswamy@...ux.intel.com,
david@...hat.com, bagasdotme@...il.com, sagis@...gle.com,
imammedo@...hat.com, kai.huang@...el.com
Subject: [PATCH v9 17/18] x86/virt/tdx: Flush cache in kexec() when TDX is enabled
There are two problems in terms of using kexec() to boot to a new kernel
when the old kernel has enabled TDX: 1) Part of the memory pages are
still TDX private pages; 2) There might be dirty cachelines associated
with TDX private pages.
The first problem doesn't matter.  KeyID 0 doesn't have integrity check.
Even if the new kernel wants to use a non-zero KeyID, it needs to convert
the memory to that KeyID first, and such conversion works regardless of
which KeyID the memory previously used.
However, the old kernel must guarantee that no dirty cachelines are left
behind before booting the new kernel, to avoid silent corruption from a
later cacheline writeback (Intel hardware doesn't guarantee cache
coherency across different KeyIDs).
There are two things that the old kernel needs to do to achieve that:
1) Stop accessing TDX private memory mappings:
a. Stop making TDX module SEAMCALLs (TDX global KeyID);
b. Stop TDX guests from running (per-guest TDX KeyID).
2) Flush any cachelines from previous TDX private KeyID writes.
For 2), flush the cache with wbinvd() in stop_this_cpu(), following what
the existing SME support does.  This also takes care of 1) for free, as
there is no TDX activity between the wbinvd() and the native_halt().
Theoretically, the cache only needs to be flushed when the TDX module
has been initialized.  However, the module is initialized on demand at
runtime, and reading its status requires taking a mutex.  Instead,
simply flush the cache whenever TDX has been enabled by the BIOS.
Signed-off-by: Kai Huang <kai.huang@...el.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@...el.com>
---
v8 -> v9:
- Various changelog enhancements and fixes (Dave).
- Improved comment (Dave).
v7 -> v8:
- Changelog:
- Removed the "leave TDX module open" part since the shutdown patch has
  been removed.
v6 -> v7:
- Improved changelog to explain why TDX private pages are not converted
  back to normal.
---
arch/x86/kernel/process.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 40d156a31676..5876dda412c7 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -765,8 +765,13 @@ void __noreturn stop_this_cpu(void *dummy)
*
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
+ *
+ * The TDX module or guests might have left dirty cachelines
+ * behind. Flush them to avoid corruption from later writeback.
+ * Note that this flushes on all systems where TDX is possible,
+ * but does not actually check that TDX was in use.
*/
- if (cpuid_eax(0x8000001f) & BIT(0))
+ if (cpuid_eax(0x8000001f) & BIT(0) || platform_tdx_enabled())
native_wbinvd();
for (;;) {
/*
--
2.39.1