Message-ID: <20260106041250.2125920-2-chengkev@google.com>
Date: Tue, 6 Jan 2026 04:12:49 +0000
From: Kevin Cheng <chengkev@...gle.com>
To: seanjc@...gle.com, pbonzini@...hat.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, yosry.ahmed@...ux.dev,
Kevin Cheng <chengkev@...gle.com>
Subject: [PATCH 1/2] KVM: SVM: Generate #UD for certain instructions when
 EFER.SVME is disabled
The AMD APM states that the VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, and
INVLPGA instructions generate a #UD when EFER.SVME is cleared.
Currently, when VMLOAD, VMSAVE, or CLGI is executed in L1 with EFER.SVME
cleared, no #UD is generated in certain cases. This is because the
intercepts for these instructions are cleared whenever vls or vgif is
enabled, and with the intercepts absent the instructions never exit to
KVM, so the #UD is never generated.
INVLPGA is always intercepted, but its exit handler never calls
nested_svm_check_permissions(), which is responsible for checking
EFER.SVME and queuing the #UD exception.
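For context, a minimal sketch of what nested_svm_check_permissions()
does (based on arch/x86/kvm/svm/nested.c; details may differ slightly
across kernel versions):

	int nested_svm_check_permissions(struct kvm_vcpu *vcpu)
	{
		/* SVM not enabled in the guest: the instruction #UDs. */
		if (!(vcpu->arch.efer & EFER_SVME) || !is_paging(vcpu)) {
			kvm_queue_exception(vcpu, UD_VECTOR);
			return 1;
		}

		/* SVM instructions are privileged: CPL > 0 gets a #GP. */
		if (to_svm(vcpu)->vmcb->save.cpl) {
			kvm_inject_gp(vcpu, 0);
			return 1;
		}

		return 0;
	}

A non-zero return tells the exit handler to bail out with the exception
already queued.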
Fix the missing #UD generation by ensuring that all relevant instructions
are intercepted when EFER.SVME is disabled and that their exit handlers
contain the necessary checks.
VMMCALL is special because KVM's ABI is that VMCALL/VMMCALL are always
supported for L1 and never fault.
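For illustration, the VMMCALL exit handler simply forwards to the generic
hypercall path and does not (and should not) call
nested_svm_check_permissions(); roughly:

	static int vmmcall_interception(struct kvm_vcpu *vcpu)
	{
		/* Per KVM's ABI, VMMCALL never faults for L1. */
		return kvm_emulate_hypercall(vcpu);
	}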
Signed-off-by: Kevin Cheng <chengkev@...gle.com>
---
arch/x86/kvm/svm/svm.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 24d59ccfa40d9..fc1b8707bb00c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -228,6 +228,14 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 		if (!is_smm(vcpu))
 			svm_free_nested(svm);
 
+		/*
+		 * If EFER.SVME is being cleared, we must intercept these
+		 * instructions to ensure #UD is generated.
+		 */
+		svm_set_intercept(svm, INTERCEPT_CLGI);
+		svm_set_intercept(svm, INTERCEPT_VMSAVE);
+		svm_set_intercept(svm, INTERCEPT_VMLOAD);
+		svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 	} else {
 		int ret = svm_allocate_nested(svm);
 
@@ -242,6 +250,15 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 			 */
 			if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
 				set_exception_intercept(svm, GP_VECTOR);
+
+			if (vgif)
+				svm_clr_intercept(svm, INTERCEPT_CLGI);
+
+			if (vls) {
+				svm_clr_intercept(svm, INTERCEPT_VMSAVE);
+				svm_clr_intercept(svm, INTERCEPT_VMLOAD);
+				svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+			}
 		}
 	}
 
@@ -2291,8 +2308,14 @@ static int clgi_interception(struct kvm_vcpu *vcpu)
 
 static int invlpga_interception(struct kvm_vcpu *vcpu)
 {
-	gva_t gva = kvm_rax_read(vcpu);
-	u32 asid = kvm_rcx_read(vcpu);
+	gva_t gva;
+	u32 asid;
+
+	if (nested_svm_check_permissions(vcpu))
+		return 1;
+
+	gva = kvm_rax_read(vcpu);
+	asid = kvm_rcx_read(vcpu);
 
 	/* FIXME: Handle an address size prefix. */
 	if (!is_long_mode(vcpu))
--
2.52.0.351.gbe84eed79e-goog