Message-ID: <ab08a6a1f4d1873eb09d5ad625c42a51d29e5971.1746874095.git.kai.huang@intel.com>
Date: Sat, 10 May 2025 11:20:06 +0000
From: Kai Huang <kai.huang@...el.com>
To: dave.hansen@...el.com, bp@...en8.de, tglx@...utronix.de,
	peterz@...radead.org, mingo@...hat.com
Cc: kirill.shutemov@...ux.intel.com, hpa@...or.com, x86@...nel.org,
	linux-kernel@...r.kernel.org, pbonzini@...hat.com, seanjc@...gle.com,
	rick.p.edgecombe@...el.com, isaku.yamahata@...el.com,
	reinette.chatre@...el.com, dan.j.williams@...el.com,
	thomas.lendacky@....com, ashish.kalra@....com, nik.borisov@...e.com,
	sagis@...gle.com
Subject: [PATCH v2 2/5] x86/virt/tdx: Mark memory cache state incoherent when making SEAMCALL

On TDX platforms, at the hardware level dirty cachelines with and without
a TDX keyID can coexist, and the CPU can flush them back to memory in
random order.  During kexec, the caches must be flushed before jumping to
the new kernel to avoid silent memory corruption when a cacheline with a
different encryption property is written back over whatever encryption
property the new kernel is using.

A percpu boolean is used to mark whether the cache of a given CPU may be
in an incoherent state, and kexec performs WBINVD on the CPUs with that
boolean turned on.

For TDX, only the TDX module or TDX guests can generate dirty cachelines
of TDX private memory, i.e., they are only generated when the kernel does
a SEAMCALL.  Turn on that boolean when the kernel does a SEAMCALL so that
kexec can correctly flush the cache.

Note that not all SEAMCALL leaf functions generate dirty cachelines of
TDX private memory, but for simplicity, just treat them all as if they do.

SEAMCALLs can be made from both task context and IRQ-disabled context.
Given that a SEAMCALL is just a lengthy instruction (e.g., thousands of
cycles) from the kernel's point of view and preempt_{disable|enable}() is
cheap compared to it, simply disable preemption unconditionally while
setting the percpu boolean and making the SEAMCALL.

Signed-off-by: Kai Huang <kai.huang@...el.com>
---
 arch/x86/include/asm/tdx.h | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 4a1922ec80cf..d017e48958cd 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -97,9 +97,38 @@ u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
 void tdx_init(void);
 
 #include <asm/archrandom.h>
+#include <asm/processor.h>
 
 typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
 
+static inline u64 do_seamcall(sc_func_t func, u64 fn,
+			      struct tdx_module_args *args)
+{
+	u64 ret;
+
+	preempt_disable();
+
+	/*
+	 * SEAMCALLs are made to the TDX module and can generate dirty
+	 * cachelines of TDX private memory.  Mark cache state incoherent
+	 * so that the cache can be flushed during kexec.
+	 *
+	 * Not all SEAMCALL leaf functions generate dirty cachelines,
+	 * but for simplicity just treat them all as if they do.
+	 *
+	 * This needs to be done before actually making the SEAMCALL,
+	 * because the kexec-ing CPU could send NMIs to stop remote CPUs,
+	 * in which case even disabling IRQs won't help here.
+	 */
+	this_cpu_write(cache_state_incoherent, true);
+
+	ret = func(fn, args);
+
+	preempt_enable();
+
+	return ret;
+}
+
 static inline u64 sc_retry(sc_func_t func, u64 fn,
 			   struct tdx_module_args *args)
 {
@@ -107,7 +136,7 @@ static inline u64 sc_retry(sc_func_t func, u64 fn,
 	u64 ret;
 
 	do {
-		ret = func(fn, args);
+		ret = do_seamcall(func, fn, args);
 	} while (ret == TDX_RND_NO_ENTROPY && --retry);
 
 	return ret;
--
2.43.0
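
For context, below is a minimal sketch of the consumer side of the percpu
flag, i.e. how a kexec/CPU-stop path could flush the cache based on it.
This is not part of the patch above: the cache_state_incoherent flag is
introduced earlier in this series, and the helper name here is purely a
hypothetical illustration.

/* Illustrative sketch only -- not part of this patch. */
#include <linux/percpu.h>
#include <asm/special_insns.h>		/* wbinvd() */

/* Percpu flag set by do_seamcall(); introduced earlier in this series. */
DECLARE_PER_CPU(bool, cache_state_incoherent);

/* Hypothetical helper a kexec/CPU-stop path could call before handing off. */
static inline void flush_cache_if_incoherent(void)
{
	/*
	 * WBINVD writes back and invalidates all cachelines on this CPU,
	 * including dirty lines tagged with a TDX keyID, so nothing stale
	 * is written back over the new kernel's memory after kexec.
	 */
	if (this_cpu_read(cache_state_incoherent)) {
		wbinvd();
		this_cpu_write(cache_state_incoherent, false);
	}
}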
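
A hedged caller sketch follows, showing how the sc_retry() wrapper above
is typically invoked; the leaf-function number below is a placeholder,
not a real TDH_* constant.

/* Illustrative caller sketch only. */
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/tdx.h>

#define EXAMPLE_TDH_LEAF	0	/* hypothetical leaf function number */

static int example_seamcall(void)
{
	struct tdx_module_args args = {};
	u64 ret;

	/*
	 * sc_retry() retries on TDX_RND_NO_ENTROPY; with this patch,
	 * do_seamcall() disables preemption and marks the cache state
	 * incoherent before each attempt.
	 */
	ret = sc_retry(__seamcall, EXAMPLE_TDH_LEAF, &args);

	return ret ? -EIO : 0;
}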