Message-Id: <e3db947d341ae544b236fa2e547fd98c63cc9626.1647167475.git.kai.huang@intel.com>
Date: Sun, 13 Mar 2022 23:49:59 +1300
From: Kai Huang <kai.huang@...el.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: dave.hansen@...el.com, seanjc@...gle.com, pbonzini@...hat.com,
kirill.shutemov@...ux.intel.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, peterz@...radead.org,
tony.luck@...el.com, ak@...ux.intel.com, dan.j.williams@...el.com,
isaku.yamahata@...el.com, kai.huang@...el.com
Subject: [PATCH v2 19/21] x86: Flush cache of TDX private memory during kexec()
If TDX is ever enabled and/or used to run any TD guests, the cachelines
of TDX private memory, including PAMTs, used by the TDX module need to
be flushed before transitioning to the new kernel, otherwise they may
silently corrupt the new kernel.
The TDX module can only be initialized once during its lifetime. TDX
does not provide an interface to reset the TDX module to an
uninitialized state so that it can be initialized again. If the old
kernel has enabled TDX, the new kernel won't be able to use TDX again.
Therefore, ideally the old kernel should shut down the TDX module if it
was ever initialized, so that no SEAMCALLs can be made to it again.
However, shutting down the TDX module requires calling SEAMCALL, which
requires the CPU to be in VMX operation (VMXON has been done).
Currently, only KVM handles entering/leaving VMX operation, so there's
no guarantee that all CPUs are in VMX operation during kexec().
Therefore, this implementation doesn't shut down the TDX module; it only
flushes the cache and leaves the TDX module open.
And it's fine to leave the module open. If the new kernel wants to use
TDX, it needs to go through the initialization process, and it will fail
at the first SEAMCALL because the TDX module is not in the uninitialized
state. If the new kernel doesn't want to use TDX, then the TDX module
won't run at all.
Following the implementation of SME support, use wbinvd() to flush the
cache in stop_this_cpu(). Introduce a new function platform_has_tdx()
to check only whether the platform is TDX-capable, and do wbinvd() when
it is true. platform_has_tdx() returns true when SEAMRR is enabled and
there are enough TDX private KeyIDs to run at least one TD guest (both
of which are detected at boot time). TDX is enabled on demand at
runtime, and enabling it follows a state machine protected by a mutex to
serialize callers that try to initialize TDX in parallel. Reading the
TDX module state requires holding that mutex, but stop_this_cpu() runs
in interrupt context, so just check whether the platform supports TDX
and flush the cache.
Signed-off-by: Kai Huang <kai.huang@...el.com>
---
arch/x86/include/asm/tdx.h | 2 ++
arch/x86/kernel/process.c | 15 ++++++++++++++-
arch/x86/virt/vmx/tdx.c | 14 ++++++++++++++
3 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index b526d41c4bbf..24f2b7e8b280 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -85,10 +85,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
void tdx_detect_cpu(struct cpuinfo_x86 *c);
int tdx_detect(void);
int tdx_init(void);
+bool platform_has_tdx(void);
#else
static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
static inline int tdx_detect(void) { return -ENODEV; }
static inline int tdx_init(void) { return -ENODEV; }
+static inline bool platform_has_tdx(void) { return false; }
#endif /* CONFIG_INTEL_TDX_HOST */
#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 71aa12082370..bf3d1c9cb00c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -766,8 +766,21 @@ void stop_this_cpu(void *dummy)
* without the encryption bit, they don't race each other when flushed
* and potentially end up with the wrong entry being committed to
* memory.
+ *
+ * In case of kexec, similar to SME, if TDX is ever enabled, the
+ * cachelines of TDX private memory (including PAMTs) used by TDX
+ * module need to be flushed before transiting to the new kernel,
+ * otherwise they may silently corrupt the new kernel.
+ *
+ * Note TDX is enabled on demand at runtime, and enabling TDX has a
+ * state machine protected with a mutex to prevent concurrent calls
+ * from multiple callers. Holding the mutex is required to get the
+ * TDX enabling status, but this function runs in interrupt context.
+	 * So to keep it simple, always flush the cache when the platform
+	 * supports TDX (detected at boot time), regardless of whether TDX
+	 * is truly enabled by the kernel.
*/
- if (boot_cpu_has(X86_FEATURE_SME))
+ if (boot_cpu_has(X86_FEATURE_SME) || platform_has_tdx())
native_wbinvd();
for (;;) {
/*
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index f2b9c98191ed..d9ad8dc7111e 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1681,3 +1681,17 @@ int tdx_init(void)
return ret;
}
EXPORT_SYMBOL_GPL(tdx_init);
+
+/**
+ * platform_has_tdx - Whether platform supports TDX
+ *
+ * Check whether the platform supports TDX (i.e. TDX is enabled in the
+ * BIOS), regardless of whether TDX is truly enabled by the kernel.
+ *
+ * Return true if SEAMRR is enabled, and there are sufficient TDX private
+ * KeyIDs to run TD guests.
+ */
+bool platform_has_tdx(void)
+{
+ return seamrr_enabled() && tdx_keyid_sufficient();
+}
--
2.35.1