Message-ID: <20241222090148.5363-1-zijie.wei@linux.alibaba.com>
Date: Sun, 22 Dec 2024 17:01:48 +0800
From: weizijie <zijie.wei@...ux.alibaba.com>
To: seanjc@...gle.com,
pbonzini@...hat.com,
tglx@...utronix.de,
mingo@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
x86@...nel.org,
kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
hpa@...or.com
Cc: weizijie <zijie.wei@...ux.alibaba.com>,
xuyun <xuyun_xy.xy@...ux.alibaba.com>
Subject: [PATCH v2] KVM: x86: ioapic: Optimize EOI handling to reduce unnecessary VM exits

Address performance issues caused by a vector being reused by a
non-IOAPIC source.

Commit 0fc5a36dd6b3 ("KVM: x86: ioapic: Fix level-triggered EOI and
IOAPIC reconfigure race") fixed the race between EOI handling and
IOAPIC reconfiguration, but it introduced a performance concern:

Reconfiguring the IOAPIC while an interrupt request (IRQ) is already
in service sets the in-service vector in ioapic_handled_vectors (via
the kvm_apic_pending_eoi() check), so a later EOI of that vector
triggers a VM exit even when the vector normally would not require
one, e.g. after it has been reused by a non-IOAPIC source. Because
the stale bit is only dropped by another scan, the unnecessary VM
exits persist for as long as the IOAPIC is not reconfigured again at
runtime.

Simple Fix Proposal:

Record the vector that is pending at the time of the last scan. When
that vector is EOI'd, request a full KVM_REQ_SCAN_IOAPIC, which is
serviced before the next guest entry. The IOAPIC is thus re-scanned
only once the recorded vector has been EOI'd; the extra bit in
eoi_exit_bitmap is then cleared, avoiding further unnecessary VM
exits.
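
For illustration only (not part of the patch), below is a small
stand-alone user-space sketch of the intended behavior. All names in
it are hypothetical stand-ins that merely mirror the kernel fields
and handlers touched by this patch:

/* Toy model of the patched behavior -- hypothetical, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define NR_VECTORS 256

static bool eoi_exit_bitmap[NR_VECTORS]; /* stand-in for EOI-exit bitmap */
static int last_pending_vector = -1;     /* stand-in for the new field   */

/* Scan-time decision, mirroring the patched kvm_ioapic_scan_entry() branch. */
static void scan_vector(int vector, bool routed_to_this_vcpu, bool pending_eoi)
{
	if (routed_to_this_vcpu) {
		eoi_exit_bitmap[vector] = true;
	} else if (pending_eoi) {
		/* Bit is only needed until the in-service IRQ is EOI'd. */
		eoi_exit_bitmap[vector] = true;
		last_pending_vector = vector;
	}
}

/* EOI path, mirroring the handle_apic_eoi_induced() hunk. */
static void handle_eoi(int vector)
{
	printf("EOI 0x%x: %s\n", vector,
	       eoi_exit_bitmap[vector] ? "VM exit" : "no VM exit");

	if (vector == last_pending_vector) {
		last_pending_vector = -1;
		/* Models the re-scan dropping the now-stale bit. */
		eoi_exit_bitmap[vector] = false;
	}
}

int main(void)
{
	/* Vector 0x60 had an IRQ in service while the IOAPIC was reconfigured. */
	scan_vector(0x60, false, true);

	handle_eoi(0x60); /* one last exit; the record triggers a re-scan */
	handle_eoi(0x60); /* vector reused by MSI later: no further exits */
	return 0;
}

With the patch, the recorded vector takes one final EOI-induced exit,
the re-scan drops the stale bit, and later uses of the vector by a
non-IOAPIC source no longer force a VM exit.
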
Co-developed-by: xuyun <xuyun_xy.xy@...ux.alibaba.com>
Signed-off-by: xuyun <xuyun_xy.xy@...ux.alibaba.com>
Signed-off-by: weizijie <zijie.wei@...ux.alibaba.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/ioapic.c           | 8 ++++++--
 arch/x86/kvm/irq_comm.c         | 7 +++++--
 arch/x86/kvm/vmx/vmx.c          | 9 +++++++++
 4 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e159e44a6a1b..f84a4881afa4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1041,6 +1041,7 @@ struct kvm_vcpu_arch {
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t hv_root_tdp;
 #endif
+	u8 last_pending_vector;
 };
 
 struct kvm_lpage_info {

diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
index 995eb5054360..6b203f0847ec 100644
--- a/arch/x86/kvm/ioapic.c
+++ b/arch/x86/kvm/ioapic.c
@@ -297,10 +297,14 @@ void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, ulong *ioapic_handled_vectors)
 			u16 dm = kvm_lapic_irq_dest_mode(!!e->fields.dest_mode);
 
 			if (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
-						e->fields.dest_id, dm) ||
-			    kvm_apic_pending_eoi(vcpu, e->fields.vector))
+						e->fields.dest_id, dm))
 				__set_bit(e->fields.vector,
 					  ioapic_handled_vectors);
+			else if (kvm_apic_pending_eoi(vcpu, e->fields.vector)) {
+				__set_bit(e->fields.vector,
+					  ioapic_handled_vectors);
+				vcpu->arch.last_pending_vector = e->fields.vector;
+			}
 		}
 	}
 	spin_unlock(&ioapic->lock);

diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
index 8136695f7b96..ca45be1503f4 100644
--- a/arch/x86/kvm/irq_comm.c
+++ b/arch/x86/kvm/irq_comm.c
@@ -426,9 +426,12 @@ void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu,
 
 			if (irq.trig_mode &&
 			    (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
-						 irq.dest_id, irq.dest_mode) ||
-			     kvm_apic_pending_eoi(vcpu, irq.vector)))
+						 irq.dest_id, irq.dest_mode)))
 				__set_bit(irq.vector, ioapic_handled_vectors);
+			else if (kvm_apic_pending_eoi(vcpu, irq.vector)) {
+				__set_bit(irq.vector, ioapic_handled_vectors);
+				vcpu->arch.last_pending_vector = irq.vector;
+			}
 		}
 	}
 	srcu_read_unlock(&kvm->irq_srcu, idx);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0f008f5ef6f0..2abf67e76780 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5710,6 +5710,15 @@ static int handle_apic_eoi_induced(struct kvm_vcpu *vcpu)
 
 	/* EOI-induced VM exit is trap-like and thus no need to adjust IP */
 	kvm_apic_set_eoi_accelerated(vcpu, vector);
+
+	/* If this EOI is for the vector recorded as pending at the last
+	 * IOAPIC scan, clear the record and request a full
+	 * KVM_REQ_SCAN_IOAPIC so the stale bit in eoi_exit_bitmap is cleared.
+	 */
+	if (vcpu->arch.last_pending_vector == vector) {
+		vcpu->arch.last_pending_vector = 0;
+		kvm_make_request(KVM_REQ_SCAN_IOAPIC, vcpu);
+	}
 	return 1;
 }
 
--
2.43.5