Message-ID: <166ac755-52c1-4dd2-8a7c-cd5feff11dd7@linux.alibaba.com>
Date: Thu, 27 Feb 2025 20:16:58 +0800
From: wzj <zijie.wei@...ux.alibaba.com>
To: kai.huang@...el.com
Cc: bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, mingo@...hat.com,
pbonzini@...hat.com, seanjc@...gle.com, tglx@...utronix.de, x86@...nel.org,
xuyun_xy.xy@...ux.alibaba.com, zijie.wei@...ux.alibaba.com
Subject: Re: [PATCH Resend] KVM: x86: ioapic: Optimize EOI handling to reduce
unnecessary VM exits
On 2025/2/26 22:44, Huang, Kai wrote:
>
>
> On 25/02/2025 7:42 pm, weizijie wrote:
>> Address performance issues caused by a vector being reused by a
>> non-IOAPIC source.
>>
>> Commit 0fc5a36dd6b3
>> ("KVM: x86: ioapic: Fix level-triggered EOI and IOAPIC reconfigure race")
>> addressed the race between EOI handling and IOAPIC reconfiguration.
>> However, it introduced a performance concern:
>>
>> Configuring IOAPIC interrupts while an interrupt request (IRQ) is
>> already in service can unintentionally cause VM exits for other
>> interrupts that normally do not require them, due to the way
>> `ioapic_handled_vectors` is populated. If the IOAPIC is not
>> reconfigured again at runtime, this issue persists and continues to
>> adversely affect performance.
>
> Could you elaborate on why the guest would configure the IOAPIC entry to
> use the same vector as an IRQ which is already in service? Is it some
> kinda temporary configuration (which means the guest will either
> reconfigure the vector of the conflicting IRQ or the IOAPIC entry soon)?
>
> I.e., why would this issue persist?
>
> If such persistence is due to a guest bug or bad behaviour, I am not sure
> we need to tackle that in KVM.
>
Both of the previous patches:
db2bdcbbbd32 ("KVM: x86: fix edge EOI and IOAPIC reconfig race")
0fc5a36dd6b3 ("KVM: x86: ioapic: Fix level-triggered EOI and IOAPIC
reconfigure race")
mentioned this issue.
For example, suppose an interrupt with vector 33 is being serviced on
CPU0 when an IOAPIC interrupt reconfiguration occurs, assigning an
interrupt to CPU1 whose vector is also 33 (it could be another value as
well). From that point on, every EOI of vector 33 on CPU0, which
originally did not need to cause a VM exit, will trigger one.
You are right that if the guest later triggers another IOAPIC
reconfiguration and the situation above does not recur, vector 33 on
CPU0 will no longer cause a VM exit. But if the guest never reconfigures
the IOAPIC again, the stale EOI intercept persists indefinitely.
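To make the mechanism concrete, the pre-patch check in
kvm_ioapic_scan_entry() (visible in the context lines of the ioapic.c
hunk quoted below) is effectively:

	if (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
				e->fields.dest_id, dm) ||
	    kvm_apic_pending_eoi(vcpu, e->fields.vector))
		__set_bit(e->fields.vector, ioapic_handled_vectors);

The kvm_apic_pending_eoi() disjunct sets the bit on CPU0 merely because
vector 33 is still in service there, and since eoi_exit_bitmap is only
recomputed on the next KVM_REQ_SCAN_IOAPIC, the stale intercept survives
until the guest happens to reconfigure the IOAPIC once more.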
>>
>> Simple Fix Proposal:
>> A straightforward solution is to record the highest in-service vector
>> that is pending at the time of the last scan. Then, upon the next
>> guest exit, do a full KVM_REQ_SCAN_IOAPIC. This ensures that a
>> re-scan of the ioapic occurs only when the recorded vector is EOI'd;
>> afterwards, the extra bit in the eoi_exit_bitmap is cleared, avoiding
>> unnecessary VM exits.
>>
>> Co-developed-by: xuyun <xuyun_xy.xy@...ux.alibaba.com>
>> Signed-off-by: xuyun <xuyun_xy.xy@...ux.alibaba.com>
>> Signed-off-by: weizijie <zijie.wei@...ux.alibaba.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |  1 +
>>  arch/x86/kvm/ioapic.c           | 10 ++++++++--
>>  arch/x86/kvm/irq_comm.c         |  9 +++++++--
>>  arch/x86/kvm/vmx/vmx.c          |  9 +++++++++
>>  4 files changed, 25 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 0b7af5902ff7..8c50e7b4a96f 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1062,6 +1062,7 @@ struct kvm_vcpu_arch {
>>  #if IS_ENABLED(CONFIG_HYPERV)
>>  	hpa_t hv_root_tdp;
>>  #endif
>> +	u8 last_pending_vector;
>>  };
>>
>>  struct kvm_lpage_info {
>> diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
>> index 995eb5054360..40252a800897 100644
>> --- a/arch/x86/kvm/ioapic.c
>> +++ b/arch/x86/kvm/ioapic.c
>> @@ -297,10 +297,16 @@ void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, ulong *ioapic_handled_vectors)
>>  			u16 dm = kvm_lapic_irq_dest_mode(!!e->fields.dest_mode);
>>
>>  			if (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
>> -						e->fields.dest_id, dm) ||
>> -			    kvm_apic_pending_eoi(vcpu, e->fields.vector))
>> +						e->fields.dest_id, dm))
>>  				__set_bit(e->fields.vector,
>>  					  ioapic_handled_vectors);
>> +			else if (kvm_apic_pending_eoi(vcpu, e->fields.vector)) {
>> +				__set_bit(e->fields.vector,
>> +					  ioapic_handled_vectors);
>> +				vcpu->arch.last_pending_vector = e->fields.vector >
>> +					vcpu->arch.last_pending_vector ? e->fields.vector :
>> +					vcpu->arch.last_pending_vector;
>> +			}
>>  		}
>>  	}
>>  	spin_unlock(&ioapic->lock);
>> diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
>> index 8136695f7b96..1d23c52576e1 100644
>> --- a/arch/x86/kvm/irq_comm.c
>> +++ b/arch/x86/kvm/irq_comm.c
>> @@ -426,9 +426,14 @@ void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu,
>>
>>  		if (irq.trig_mode &&
>>  		    (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
>> -					 irq.dest_id, irq.dest_mode) ||
>> -		     kvm_apic_pending_eoi(vcpu, irq.vector)))
>> +					 irq.dest_id, irq.dest_mode)))
>>  			__set_bit(irq.vector, ioapic_handled_vectors);
>> +		else if (kvm_apic_pending_eoi(vcpu, irq.vector)) {
>> +			__set_bit(irq.vector, ioapic_handled_vectors);
>> +			vcpu->arch.last_pending_vector = irq.vector >
>> +				vcpu->arch.last_pending_vector ? irq.vector :
>> +				vcpu->arch.last_pending_vector;
>> +		}
>>  		}
>>  	}
>>  	srcu_read_unlock(&kvm->irq_srcu, idx);
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 6c56d5235f0f..047cdd5964e5 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -5712,6 +5712,15 @@ static int handle_apic_eoi_induced(struct kvm_vcpu *vcpu)
>>  	/* EOI-induced VM exit is trap-like and thus no need to adjust IP */
>>  	kvm_apic_set_eoi_accelerated(vcpu, vector);
>> +
>> +	/* When there are instances where ioapic_handled_vectors is
>> +	 * set due to pending interrupts, clean up the record and do
>> +	 * a full KVM_REQ_SCAN_IOAPIC.
>> +	 */
>
> Comment style:
>
> /*
>  * When ...
>  */
>
Thank you very much for your suggestion.
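Applied here, the comment will become:

	/*
	 * When there are instances where ioapic_handled_vectors is
	 * set due to pending interrupts, clean up the record and do
	 * a full KVM_REQ_SCAN_IOAPIC.
	 */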
>> +	if (vcpu->arch.last_pending_vector == vector) {
>> +		vcpu->arch.last_pending_vector = 0;
>> +		kvm_make_request(KVM_REQ_SCAN_IOAPIC, vcpu);
>> +	}
>
> As Sean commented before, this should be in a common code probably in
> kvm_ioapic_send_eoi().
>
I will move the change into the common code in kvm_ioapic_send_eoi() and
send a new patch.
> https://lore.kernel.org/all/Z2IDkWPz2rhDLD0P@google.com/
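A rough sketch of what that might look like (assuming the same
last_pending_vector bookkeeping as in this patch; kvm_ioapic_send_eoi()
in lapic.c is reached from both the accelerated and the non-accelerated
EOI paths, so both would be covered):

	static void kvm_ioapic_send_eoi(struct kvm_lapic *apic, int vector)
	{
		struct kvm_vcpu *vcpu = apic->vcpu;

		/*
		 * If this vector was intercepted only because it was in
		 * service during the last IOAPIC scan, drop the record and
		 * rescan so the stale bit in eoi_exit_bitmap is cleared on
		 * the next guest entry.
		 */
		if (vcpu->arch.last_pending_vector == vector) {
			vcpu->arch.last_pending_vector = 0;
			kvm_make_request(KVM_REQ_SCAN_IOAPIC, vcpu);
		}

		/* existing IOAPIC EOI forwarding logic follows ... */
	}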
Best regards!