Message-ID: <20250819192823.GLaKTQVxIV4n7p60hU@fat_crate.local>
Date: Tue, 19 Aug 2025 21:28:23 +0200
From: Borislav Petkov <bp@...en8.de>
To: Kai Huang <kai.huang@...el.com>
Cc: dave.hansen@...el.com, tglx@...utronix.de, peterz@...radead.org,
	mingo@...hat.com, hpa@...or.com, thomas.lendacky@....com,
	x86@...nel.org, kas@...nel.org, rick.p.edgecombe@...el.com,
	dwmw@...zon.co.uk, linux-kernel@...r.kernel.org,
	pbonzini@...hat.com, seanjc@...gle.com, kvm@...r.kernel.org,
	reinette.chatre@...el.com, isaku.yamahata@...el.com,
	dan.j.williams@...el.com, ashish.kalra@....com,
	nik.borisov@...e.com, chao.gao@...el.com, sagis@...gle.com,
	farrah.chen@...el.com
Subject: Re: [PATCH v6 2/7] x86/sme: Use percpu boolean to control WBINVD
 during kexec

On Thu, Aug 14, 2025 at 11:59:02AM +1200, Kai Huang wrote:
> TL;DR:
> 
> Prepare to unify how TDX and SME do cache flushing during kexec by
> making a percpu boolean control whether to do the WBINVD.
> 
> -- Background --
> 
> On SME platforms, dirty cacheline aliases with and without the
> encryption bit can coexist, and the CPU can flush them back to memory
> in random order.  During kexec, the caches must be flushed before
> jumping to the new kernel, otherwise the dirty cachelines could
> silently corrupt the memory used by the new kernel due to the
> different encryption properties.
> 
> TDX also needs a cache flush during kexec for the same reason.  It would
> be good to have a generic way to flush the cache instead of scattering
> checks for each feature all around.
> 
> When SME is enabled, the kernel encrypts essentially all memory,
> including the kernel itself, and a simple memory write from the kernel
> can dirty cachelines.  Currently, the kernel uses WBINVD to flush the
> cache for SME during kexec in two places (see the sketch after this
> list):
> 
> 1) the one in stop_this_cpu() for all remote CPUs when the kexec-ing CPU
>    stops them;
> 2) the one in the relocate_kernel() where the kexec-ing CPU jumps to the
>    new kernel.
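> 
> As a minimal, illustrative sketch of the first of these flushes (the
> CPUID leaf is the real SME-support check, but treat the exact code as
> an approximation of what stop_this_cpu() does, not a verbatim copy):
> 
> 	/* stop_this_cpu(): flush caches when the platform supports SME */
> 	if (cpuid_eax(0x8000001f) & BIT(0))	/* bit 0: SME supported */
> 		native_wbinvd();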
> 
> -- Solution --
> 
> Unlike SME, TDX can only dirty cachelines when it is used (i.e., when
> SEAMCALLs are performed).  Since there are no more SEAMCALLs after the
> aforementioned WBINVDs, leverage this for TDX.
> 
> To unify the approach for SME and TDX, use a percpu boolean to indicate
> the cache may be in an incoherent state and needs flushing during kexec,
> and set the boolean for SME.  TDX can then leverage it.
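> 
> A minimal sketch of that scheme (the variable name here is
> illustrative, not necessarily the one the patch introduces):
> 
> 	/* set when this CPU's caches may hold incoherent dirty lines */
> 	DEFINE_PER_CPU(bool, cache_state_incoherent);
> 
> 	/* on the kexec paths: flush only when flagged */
> 	if (this_cpu_read(cache_state_incoherent))
> 		native_wbinvd();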
> 
> While SME could use a global flag (since it is enabled at early boot
> on all CPUs), the percpu flag fits TDX better:
> 
> The percpu flag can be set when a CPU makes a SEAMCALL, and cleared
> when a later WBINVD on that CPU obviates the need for a kexec-time
> WBINVD.  Avoiding the kexec-time WBINVD is valuable, because there is
> an existing race[*] in which kexec could proceed while another CPU is
> still active.  WBINVD could make this race worse, so it is worth
> skipping it when possible.
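> 
> In TDX terms, the intended usage looks roughly like this (hypothetical
> placement, shown only to illustrate the set/clear pairing):
> 
> 	/* before issuing a SEAMCALL: caches may become incoherent */
> 	this_cpu_write(cache_state_incoherent, true);
> 
> 	/* ... later, once a WBINVD has been done on this CPU ... */
> 	native_wbinvd();
> 	this_cpu_write(cache_state_incoherent, false);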
> 
> -- Side effect to SME --
> 
> Today the first WBINVD, in stop_this_cpu(), is performed when SME is
> *supported* by the platform, while the second WBINVD, in
> relocate_kernel(), is done only when SME is *activated* by the kernel.
> Make things simple by also doing the second WBINVD when the platform
> supports SME.  This allows the kernel to simply turn on the percpu
> boolean when bringing up a CPU, by checking whether the platform
> supports SME.
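> 
> A sketch of that bringup hook (hypothetical placement, e.g. in the AMD
> CPU init code that the diffstat below touches):
> 
> 	/* during CPU bringup: platform supports SME, so flag the CPU */
> 	if (cpuid_eax(0x8000001f) & BIT(0))
> 		this_cpu_write(cache_state_incoherent, true);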
> 
> No other functional change intended.
> 
> [*] The aforementioned race:
> 
> During kexec, native_stop_other_cpus() is called to stop all remote
> CPUs before jumping to the new kernel.  native_stop_other_cpus() first
> sends normal REBOOT-vector IPIs to stop the remote CPUs and waits for
> them to stop.  If that times out, it sends NMIs to the CPUs that are
> still alive.  The race happens when native_stop_other_cpus() has to
> send NMIs, and it can potentially result in a system hang (for more
> information please see [1]).

This text is meandering a bit too much across a bunch of things and could be
made tighter... Just a nitpick anyway...

>  arch/x86/include/asm/kexec.h         |  4 ++--
>  arch/x86/include/asm/processor.h     |  2 ++
>  arch/x86/kernel/cpu/amd.c            | 17 +++++++++++++++++
>  arch/x86/kernel/machine_kexec_64.c   | 14 ++++++++++----
>  arch/x86/kernel/process.c            | 24 +++++++++++-------------
>  arch/x86/kernel/relocate_kernel_64.S | 13 ++++++++++---
>  6 files changed, 52 insertions(+), 22 deletions(-)

Reviewed-by: Borislav Petkov (AMD) <bp@...en8.de>

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
