Message-ID: <aV1StCzKWxAQ-B93@google.com>
Date: Tue, 6 Jan 2026 10:21:40 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Kevin Cheng <chengkev@...gle.com>
Cc: pbonzini@...hat.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	yosry.ahmed@...ux.dev
Subject: Re: [PATCH 1/2] KVM: SVM: Generate #UD for certain instructions when
 EFER.SVME is disabled

On Tue, Jan 06, 2026, Kevin Cheng wrote:
> The AMD APM states that VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, and
> INVLPGA instructions should generate a #UD when EFER.SVME is cleared.
> Currently, when VMLOAD, VMSAVE, or CLGI are executed in L1 with
> EFER.SVME cleared, no #UD is generated in certain cases. This is because
> the intercepts for these instructions are cleared based on whether or
> not vls or vgif is enabled. The #UD fails to be generated when the
> intercepts are absent.
> 
> INVLPGA is always intercepted, but there is no call to
> nested_svm_check_permissions() which is responsible for checking
> EFER.SVME and queuing the #UD exception.

Please split the INVLPGA fix into a separate patch; it's very much a separate
logical change.  That will allow for more precise shortlogs, e.g.

  KVM: SVM: Recalc instructions intercepts when EFER.SVME is toggled

and

  KVM: SVM: Inject #UD for INVLPGA if EFER.SVME=0

> Fix the missing #UD generation by ensuring that all relevant
> instructions have intercepts set when EFER.SVME is disabled and that the
> exit handlers contain the necessary checks.
> 
> VMMCALL is special because KVM's ABI is that VMCALL/VMMCALL are always
> supported for L1 and never fault.
> 
> Signed-off-by: Kevin Cheng <chengkev@...gle.com>
> ---
>  arch/x86/kvm/svm/svm.c | 27 +++++++++++++++++++++++++--
>  1 file changed, 25 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 24d59ccfa40d9..fc1b8707bb00c 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -228,6 +228,14 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
>  			if (!is_smm(vcpu))
>  				svm_free_nested(svm);
>  
> +			/*
> +			 * If EFER.SVME is being cleared, we must intercept these

No pronouns.

			/*
			 * Intercept instructions that #UD if EFER.SVME=0, as
			 * SVME must be set even when running the guest, i.e.
			 * hardware will only ever see EFER.SVME=1.
			 */

> +			 * instructions to ensure #UD is generated.
> +			 */
> +			svm_set_intercept(svm, INTERCEPT_CLGI);

What about STGI?  Per the APM, it #UDs if:

  Secure Virtual Machine was not enabled (EFER.SVME=0) and both of the following
  conditions were true:
    • SVM Lock is not available, as indicated by CPUID Fn8000_000A_EDX[SVML] = 0.
    • DEV is not available, as indicated by CPUID Fn8000_0001_ECX[SKINIT] = 0.
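
One possible shape for the EFER.SVME=0 path, as a rough, untested sketch only
(guest_cpuid_has() and the X86_FEATURE_SVML/X86_FEATURE_SKINIT flags are
assumed from the existing KVM/cpufeatures code, not taken from this patch):

	/*
	 * Sketch: STGI only #UDs with EFER.SVME=0 if neither SVM Lock nor
	 * SKINIT is exposed to the guest, so the intercept could be made
	 * conditional on guest CPUID.
	 */
	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVML) &&
	    !guest_cpuid_has(vcpu, X86_FEATURE_SKINIT))
		svm_set_intercept(svm, INTERCEPT_STGI);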


And this code in init_vmcb() can/should be dropped:

	if (vgif) {
		svm_clr_intercept(svm, INTERCEPT_STGI);
		svm_clr_intercept(svm, INTERCEPT_CLGI);
		svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
	}

> +			svm_set_intercept(svm, INTERCEPT_VMSAVE);
> +			svm_set_intercept(svm, INTERCEPT_VMLOAD);
> +			svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
>  		} else {
>  			int ret = svm_allocate_nested(svm);
>  
> @@ -242,6 +250,15 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
>  			 */
>  			if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
>  				set_exception_intercept(svm, GP_VECTOR);
> +
> +			if (vgif)
> +				svm_clr_intercept(svm, INTERCEPT_CLGI);
> +
> +			if (vls) {
> +				svm_clr_intercept(svm, INTERCEPT_VMSAVE);
> +				svm_clr_intercept(svm, INTERCEPT_VMLOAD);
> +				svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;

This is wrong.  In the rather absurd scenario that the vCPU model presented to
the guest is an Intel CPU, KVM needs to intercept VMSAVE/VMLOAD to deal with the
SYSENTER MSRs.

This logic will also get blasted away if svm_recalc_instruction_intercepts()
runs.

So rather than manually handle the intercepts in svm_set_efer() and fight recalcs,
trigger KVM_REQ_RECALC_INTERCEPTS and teach svm_recalc_instruction_intercepts()
about EFER.SVME handling.

After the dust settles, it might make sense to move the #GP intercept logic into
svm_recalc_intercepts() as well, but that's not a priority.

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 24d59ccfa40d..0b5e6a7e004b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -243,6 +243,8 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
                        if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
                                set_exception_intercept(svm, GP_VECTOR);
                }
+
+               kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
        }
 
        svm->vmcb->save.efer = efer | EFER_SVME;
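
For the recalc side, something along these lines (rough, untested sketch; the
signature, the vls/vgif module params, VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK and
guest_cpuid_is_intel_compatible() are all assumed from the existing code, and
the STGI question above is ignored):

	static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
						      struct vcpu_svm *svm)
	{
		/* ... existing INVPCID/RDTSCP handling ... */

		/*
		 * Intercept the SVM instructions that must #UD when the guest
		 * sees EFER.SVME=0; hardware always runs with EFER.SVME=1.
		 */
		if (!(vcpu->arch.efer & EFER_SVME)) {
			svm_set_intercept(svm, INTERCEPT_CLGI);
			svm_set_intercept(svm, INTERCEPT_VMLOAD);
			svm_set_intercept(svm, INTERCEPT_VMSAVE);
			svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
		} else {
			if (vgif)
				svm_clr_intercept(svm, INTERCEPT_CLGI);

			/*
			 * Keep VMLOAD/VMSAVE intercepted for Intel-compatible
			 * vCPUs to handle the SYSENTER MSRs.
			 */
			if (vls && !guest_cpuid_is_intel_compatible(vcpu)) {
				svm_clr_intercept(svm, INTERCEPT_VMLOAD);
				svm_clr_intercept(svm, INTERCEPT_VMSAVE);
				svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
			}
		}
	}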


> +			}
>  		}
>  	}
>  
> @@ -2291,8 +2308,14 @@ static int clgi_interception(struct kvm_vcpu *vcpu)
>  
>  static int invlpga_interception(struct kvm_vcpu *vcpu)
>  {
> -	gva_t gva = kvm_rax_read(vcpu);
> -	u32 asid = kvm_rcx_read(vcpu);
> +	gva_t gva;
> +	u32 asid;
> +
> +	if (nested_svm_check_permissions(vcpu))
> +		return 1;

Please split the INVLPGA fix into a separate patch.

> +
> +	gva = kvm_rax_read(vcpu);
> +	asid = kvm_rcx_read(vcpu);

Eh, I'd rather keep the immediate initialization of gva and asid.  Reading RAX
and RCX is basically free and completely harmless, and in all likelihood the
compiler will defer the loads until after the permission checks anyways.
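
I.e. the handler in the split-out patch could look roughly like this (sketch
only, reusing the calls already present in the quoted hunk):

	static int invlpga_interception(struct kvm_vcpu *vcpu)
	{
		gva_t gva = kvm_rax_read(vcpu);
		u32 asid = kvm_rcx_read(vcpu);

		if (nested_svm_check_permissions(vcpu))
			return 1;

		/* ... rest of the handler unchanged ... */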

>  
>  	/* FIXME: Handle an address size prefix. */
>  	if (!is_long_mode(vcpu))
> -- 
> 2.52.0.351.gbe84eed79e-goog
> 
