Message-ID: <aOdJ7JZWsfanX0JV@intel.com>
Date: Thu, 9 Oct 2025 13:36:44 +0800
From: Chao Gao <chao.gao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] KVM: VMX: Flush shadow VMCS on emergency reboot

On Wed, Oct 08, 2025 at 04:01:07PM -0700, Sean Christopherson wrote:
>Trimmed Cc: to lists, as this is basically off-topic, but I thought you might
>be amused :-)
>
>On Thu, Apr 10, 2025, Sean Christopherson wrote:
>> On Mon, Mar 24, 2025, Chao Gao wrote:
>> > Ensure the shadow VMCS cache is evicted during an emergency reboot to
>> > prevent potential memory corruption if the cache is evicted after reboot.
>> 
>> I don't suppose Intel would want to go on record and state what CPUs would actually
>> be affected by this bug.  My understanding is that Intel has never shipped a CPU
>> that caches shadow VMCS state.

Yes. Shadow VMCSes are never cached. But that is an implementation detail:
per the SDM, software is required to VMCLEAR a shadow VMCS that was made
active, for forward compatibility.

>> 
>> On a very related topic, doesn't SPR+ now flush the VMCS caches on VMXOFF?  If
>> that's going to be the architectural behavior going forward, will that behavior
>> be enumerated to software?  Regardless of whether there's software enumeration,
>> I would like to have the emergency disable path depend on that behavior.  In part
>> to gain confidence that SEAM VMCSes won't screw over kdump, but also in light of
>> this bug.

Yes. In the current implementation, CPUs with SEAM support flush _all_ VMCS
caches on VMXOFF. But the architectural behavior is trending toward having
CPUs that enumerate IA32_VMX_PROCBASED_CTRLS3[5] as 1 flush _SEAM_ VMCS
caches on VMXOFF.

>
>Apparently I completely purged it from my memory, but while poking through an
>internal branch related to moving VMXON out of KVM, I came across this:
>
>--
>Author:     Sean Christopherson <seanjc@...gle.com>
>AuthorDate: Wed Jan 17 16:19:28 2024 -0800
>Commit:     Sean Christopherson <seanjc@...gle.com>
>CommitDate: Fri Jan 26 13:16:31 2024 -0800
>
>    KVM: VMX: VMCLEAR loaded shadow VMCSes on kexec()
>    
>    Add a helper to VMCLEAR _all_ loaded VMCSes in a loaded_vmcs pair, and use
>    it when doing VMCLEAR before kexec() after a crash to fix a (likely benign)
>    bug where KVM neglects to VMCLEAR loaded shadow VMCSes.  The bug is likely
>    benign as existing Intel CPUs don't insert shadow VMCSes into the VMCS
>    cache, i.e. shadow VMCSes can't be evicted since they're never cached, and
>    thus won't clobber memory in the new kernel.
>
>--
>
>At least my reaction was more or less the same both times?
>
>> If all past CPUs never cache shadow VMCS state, and all future CPUs flush the
>> caches on VMXOFF, then this is a glorified NOP, and thus probably shouldn't be
>> tagged for stable.
>> 
>> > This issue was identified through code inspection, as __loaded_vmcs_clear()
>> > flushes both the normal VMCS and the shadow VMCS.
>> > 
>> > Avoid checking the "launched" state during an emergency reboot, unlike the
>> > behavior in __loaded_vmcs_clear(). This is important because reboot NMIs
>> > can interfere with operations like copy_shadow_to_vmcs12(), where shadow
>> > VMCSes are loaded directly using VMPTRLD. In such cases, if NMIs occur
>> > right after the VMCS load, the shadow VMCSes will be active but the
>> > "launched" state may not be set.
>> > 
>> > Signed-off-by: Chao Gao <chao.gao@...el.com>
>> > ---
>> >  arch/x86/kvm/vmx/vmx.c | 5 ++++-
>> >  1 file changed, 4 insertions(+), 1 deletion(-)
>> > 
>> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> > index b70ed72c1783..dccd1c9939b8 100644
>> > --- a/arch/x86/kvm/vmx/vmx.c
>> > +++ b/arch/x86/kvm/vmx/vmx.c
>> > @@ -769,8 +769,11 @@ void vmx_emergency_disable_virtualization_cpu(void)
>> >  		return;
>> >  
>> >  	list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
>> > -			    loaded_vmcss_on_cpu_link)
>> > +			    loaded_vmcss_on_cpu_link) {
>> >  		vmcs_clear(v->vmcs);
>> > +		if (v->shadow_vmcs)
>> > +			vmcs_clear(v->shadow_vmcs);
>> > +	}
>> >  
>> >  	kvm_cpu_vmxoff();
>> >  }
>> > -- 
>> > 2.46.1
>> > 
