Message-ID: <aJTytueCqmZXtbUk@google.com>
Date: Thu, 7 Aug 2025 11:38:46 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: hugo lee <cs.hugolee@...il.com>
Cc: pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Yuguo Li <hugoolli@...cent.com>
Subject: Re: [PATCH] KVM: x86: Synchronize APIC State with QEMU when irqchip=split
On Thu, Aug 07, 2025, hugo lee wrote:
> On Thu, Aug 7, 2025 Sean Christopherson wrote:
> >
> > On Wed, Aug 06, 2025, Yuguo Li wrote:
> > > When using split irqchip mode, the IOAPIC is handled by QEMU while the LAPIC
> > > is emulated by KVM. When the guest disables LINT0, KVM doesn't exit to QEMU
> > > for synchronization, leaving the IOAPIC unaware of the change. This may cause
> > > the vCPU to be kicked when external devices (e.g. PIT) keep sending interrupts.
> >
> > I don't entirely follow what the problem is. Is the issue that QEMU injects an
> > IRQ that should have been blocked? Or is QEMU forcing the vCPU to exit unnecessarily?
> >
>
> The issue is that QEMU keeps injecting should-be-blocked IRQs
> (blocked by the guest; QEMU just doesn't know that).
> As a result, QEMU forces the vCPU to exit unnecessarily.
Is the problem that the guest receives spurious IRQs, or that QEMU is forcing
unnecessary exits, i.e. hurting performance?
> > > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > > index 8172c2042dd6..65ffa89bf8a6 100644
> > > --- a/arch/x86/kvm/lapic.c
> > > +++ b/arch/x86/kvm/lapic.c
> > > @@ -2329,6 +2329,10 @@ static int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
> > > val |= APIC_LVT_MASKED;
> > > val &= apic_lvt_mask[index];
> > > kvm_lapic_set_reg(apic, reg, val);
> > > + if (irqchip_split(apic->vcpu->kvm) && (val & APIC_LVT_MASKED)) {
> >
> > This applies to much more than just LINT0, and for at least LVTPC and LVTCMCI,
> > KVM definitely doesn't want to exit on every change.
>
> Actually, every masking of a LAPIC LVT entry should be synchronized with the IOAPIC.
No, because not all LVT entries can be wired up to the I/O APIC.
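If the "push" approach really is necessary, then at the very least restrict it
to the LINTx entries.  Rough, untested sketch based on the quoted hunk (how the
userspace notification itself would be done is deliberately left out):

	kvm_lapic_set_reg(apic, reg, val);
	/* Only the LINTx entries can be wired up to external interrupt lines. */
	if (irqchip_split(apic->vcpu->kvm) &&
	    (reg == APIC_LVT0 || reg == APIC_LVT1) &&
	    (val & APIC_LVT_MASKED)) {
		/* notify userspace here */
	}
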
> Any desynchronization may cause unnecessary kicks, although that rarely
> happens because guests are generally well-behaved.
> Exits here won't hurt, but maybe only exit when LINT0 is being masked?
Exits here absolutely will harm the VM by generating spurious slow path exits.
> Since the others are unlikely to cause exits.
On Intel, LVTPC is masked on every PMI.
> > Even for LINT0, it's not obvious that "pushing" from KVM is a better option than
> > having QEMU "pull" as needed.
> >
>
> QEMU has no idea when LINT0 is masked by the guest, so the problem becomes
> knowing when a "pull" is needed. Guessing at that could add extra cost.
So this patch is motivated by performance?
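
FWIW, "pull" doesn't have to mean guessing.  Userspace can read the in-kernel
APIC state on demand via KVM_GET_LAPIC and check LVT0 before bothering to kick
the vCPU.  Very rough sketch of the userspace side (raw ioctl, register offset
and mask bit per the SDM; obviously not the actual QEMU plumbing, and whether a
synchronous KVM_GET_LAPIC ends up cheaper in practice is a separate question):

	#include <stdbool.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* LVT0 lives at offset 0x350 of the APIC register page, mask is bit 16. */
	static bool lint0_is_masked(int vcpu_fd)
	{
		struct kvm_lapic_state lapic;
		uint32_t lvt0;

		if (ioctl(vcpu_fd, KVM_GET_LAPIC, &lapic))
			return false;

		memcpy(&lvt0, &lapic.regs[0x350], sizeof(lvt0));
		return lvt0 & (1u << 16);
	}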