Message-ID: <20240927161657.68110-3-iorlov@amazon.com>
Date: Fri, 27 Sep 2024 16:16:56 +0000
From: Ivan Orlov <iorlov@...zon.com>
To: <bp@...en8.de>, <dave.hansen@...ux.intel.com>, <mingo@...hat.com>,
<pbonzini@...hat.com>, <seanjc@...gle.com>, <shuah@...nel.org>,
<tglx@...utronix.de>
CC: Ivan Orlov <iorlov@...zon.com>, <hpa@...or.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>,
<x86@...nel.org>, <jalliste@...zon.com>, <nh-open-source@...zon.com>,
<pdurrant@...zon.co.uk>
Subject: [PATCH 2/3] KVM: vmx, svm, mmu: Process MMIO during event delivery

Currently, a guest accessing MMIO during event delivery is handled
differently by VMX and SVM: VMX returns an internal error with
suberror = KVM_INTERNAL_ERROR_DELIVERY_EV, while SVM simply falls into
an infinite loop, trying to deliver the event again and again.

This situation can arise when an exception occurs while the guest IDTR
(or GDTR) descriptor base points to an MMIO address. Even with the
fixes for infinite loops on TDP failures applied, the problem still
exists on SVM.

Eliminate the SVM/VMX difference by returning a KVM internal error with
suberror = KVM_INTERNAL_ERROR_DELIVERY_EV on both SVM and VMX. As we
don't have a reliable way to detect an MMIO operation on SVM before
actually looking at the GPA, move the problem detection into the common
KVM x86 layer (into the kvm_mmu_page_fault function) and add the
PFERR_EVT_DELIVERY flag, which the SVM/VMX-specific vmexit handlers set
to signal that we are in the middle of event delivery.

This doesn't add noticeable overhead on the VMX side either, as a guest
accessing MMIO during event delivery is a rare situation.

Signed-off-by: Ivan Orlov <iorlov@...zon.com>
---
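
For illustration, a rough sketch of a guest sequence that hits this
path follows. This is an assumption-laden example, not code from this
series: the MMIO_GPA value and the helper name are made up, and the
snippet presumes it runs at CPL0 inside the guest.

	#include <stdint.h>

	/* Descriptor-table pointer for LIDT (m16&64 operand). */
	struct __attribute__((packed)) idt_ptr {
		uint16_t size;
		uint64_t address;
	};

	static void guest_trigger_mmio_delivery(void)
	{
		/* Hypothetical GPA not backed by any memslot, i.e. MMIO. */
		const uint64_t MMIO_GPA = 0xc0000000;
		struct idt_ptr idtr = { .size = 0xfff, .address = MMIO_GPA };

		asm volatile("lidt %0" :: "m"(idtr));
		/*
		 * Delivering any vectored event now forces the CPU to read
		 * the IDT from MMIO, which KVM can't do transparently.
		 */
		asm volatile("int3");
	}
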
arch/x86/include/asm/kvm_host.h | 6 ++++++
arch/x86/kvm/mmu/mmu.c | 15 ++++++++++++++-
arch/x86/kvm/svm/svm.c | 4 ++++
arch/x86/kvm/vmx/vmx.c | 21 +++++++++------------
4 files changed, 33 insertions(+), 13 deletions(-)
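
On the userspace side, this now surfaces as KVM_EXIT_INTERNAL_ERROR on
both backends. Below is a minimal sketch of how a VMM's run loop might
report it (the exact contents of internal.data are defined by this
series and deliberately not assumed here):

	#include <stdio.h>
	#include <linux/kvm.h>

	/* Report an event delivery failure signalled by KVM_RUN. */
	static void report_delivery_failure(struct kvm_run *run)
	{
		if (run->exit_reason == KVM_EXIT_INTERNAL_ERROR &&
		    run->internal.suberror == KVM_INTERNAL_ERROR_DELIVERY_EV)
			fprintf(stderr,
				"event delivery failed (%u data words)\n",
				run->internal.ndata);
	}
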
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 348daba424dd..a1088239c9f5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -282,6 +282,12 @@ enum x86_intercept_stage;
#define PFERR_PRIVATE_ACCESS BIT_ULL(49)
#define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
+/*
+ * EVT_DELIVERY is a KVM-defined flag used to indicate that a fault occurred
+ * during event delivery.
+ */
+#define PFERR_EVT_DELIVERY BIT_ULL(50)
+
/* apic attention bits */
#define KVM_APIC_CHECK_VAPIC 0
/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e081f785fb23..36e25a6ba364 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6120,8 +6120,21 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
return -EFAULT;
r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
- if (r == RET_PF_EMULATE)
+ if (r == RET_PF_EMULATE) {
+ /*
+ * Check if the guest is accessing MMIO during event delivery. For
+ * instance, this can happen if the guest sets the IDT / GDT descriptor
+ * base to point to an MMIO address. We can't deliver such an event
+ * without VMM intervention, so return a corresponding internal error
+ * instead (otherwise, the vCPU would fall into an infinite loop trying
+ * to deliver the event again and again).
+ */
+ if (error_code & PFERR_EVT_DELIVERY) {
+ kvm_prepare_ev_delivery_failure_exit(vcpu, cr2_or_gpa, true);
+ return 0;
+ }
goto emulate;
+ }
}
if (r == RET_PF_INVALID) {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9df3e1e5ae81..93ce8c3d02f3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2059,6 +2059,10 @@ static int npf_interception(struct kvm_vcpu *vcpu)
u64 fault_address = svm->vmcb->control.exit_info_2;
u64 error_code = svm->vmcb->control.exit_info_1;
+ /* Check if we have events awaiting delivery */
+ if (svm->vmcb->control.exit_int_info & SVM_EXITINTINFO_TYPE_MASK)
+ error_code |= PFERR_EVT_DELIVERY;
+
/*
* WARN if hardware generates a fault with an error code that collides
* with KVM-defined sythentic flags. Clear the flags and continue on,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index afd785e7f3a3..bbe1126c3c7d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5828,6 +5828,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
{
gpa_t gpa;
+ u64 error_code = PFERR_RSVD_MASK;
+
+ /* Do we have events awaiting delivery? */
+ error_code |= (to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK)
+ ? PFERR_EVT_DELIVERY : 0;
if (vmx_check_emulate_instruction(vcpu, EMULTYPE_PF, NULL, 0))
return 1;
@@ -5843,7 +5848,7 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
return kvm_skip_emulated_instruction(vcpu);
}
- return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+ return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
}
static int handle_nmi_window(struct kvm_vcpu *vcpu)
@@ -6536,24 +6541,16 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
return 0;
}
- /*
- * Note:
- * Do not try to fix EXIT_REASON_EPT_MISCONFIG if it caused by
- * delivery event since it indicates guest is accessing MMIO.
- * The vm-exit can be triggered again after return to guest that
- * will cause infinite loop.
- */
if ((vectoring_info & VECTORING_INFO_VALID_MASK) &&
(exit_reason.basic != EXIT_REASON_EXCEPTION_NMI &&
exit_reason.basic != EXIT_REASON_EPT_VIOLATION &&
exit_reason.basic != EXIT_REASON_PML_FULL &&
exit_reason.basic != EXIT_REASON_APIC_ACCESS &&
exit_reason.basic != EXIT_REASON_TASK_SWITCH &&
- exit_reason.basic != EXIT_REASON_NOTIFY)) {
+ exit_reason.basic != EXIT_REASON_NOTIFY &&
+ exit_reason.basic != EXIT_REASON_EPT_MISCONFIG)) {
gpa_t gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
- bool is_mmio = exit_reason.basic == EXIT_REASON_EPT_MISCONFIG;
-
- kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, is_mmio);
+ kvm_prepare_ev_delivery_failure_exit(vcpu, gpa, false);
return 0;
}
--
2.43.0