Message-ID: <Zaa1omCaDQOxxy2j@google.com>
Date: Tue, 16 Jan 2024 08:58:10 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Christian Borntraeger <borntraeger@...ux.ibm.com>
Cc: Yi Wang <up2wing@...il.com>, pbonzini@...hat.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
wanpengli@...cent.com, Yi Wang <foxywang@...cent.com>,
Oliver Upton <oliver.upton@...ux.dev>, Marc Zyngier <maz@...nel.org>,
Anup Patel <anup@...infault.org>, Atish Patra <atishp@...shpatra.org>,
Janosch Frank <frankja@...ux.ibm.com>, Claudio Imbrenda <imbrenda@...ux.ibm.com>
Subject: Re: [PATCH] KVM: irqchip: synchronize srcu only if needed

On Tue, Jan 16, 2024, Christian Borntraeger wrote:
>
>
> > On 15.01.24 at 17:01, Yi Wang wrote:
> > Many thanks for your such kind and detailed reply, Sean!
> >
> > On Sat, Jan 13, 2024 at 12:28 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > >
> > > +other KVM maintainers
> > >
> > > On Fri, Jan 12, 2024, Yi Wang wrote:
> > > > From: Yi Wang <foxywang@...cent.com>
> > > >
> > > > We found that, very occasionally, enabling the KVM_CAP_SPLIT_IRQCHIP
> > > > capability can take more than 20 milliseconds on a host that is
> > > > already running many VMs.
> > > >
> > > > The reason is that when the VMM (qemu/Cloud Hypervisor) enables
> > > > KVM_CAP_SPLIT_IRQCHIP, KVM calls synchronize_srcu_expedited(), and
> > > > might_sleep() plus the SRCU kworker can add delay during this period.
> > >
> > > might_sleep() yielding is not justification for changing KVM. That's more or
> > > less saying "my task got preempted and took longer to run". Well, yeah.
> >
> > Agreed. But I suppose it may be one of the reasons KVM_CAP_SPLIT_IRQCHIP
> > takes so long; of course, the kworker is the prime suspect :)
> >
> > >
> > > > Since this happens during VM creation, there is no need to synchronize
> > > > srcu: nothing is ready yet (no vCPUs, no irqfds) and nothing uses
> > > > irq_srcu at this point.
> >
> > ....
> >
> > > And on x86, I'm pretty sure as of commit 654f1f13ea56 ("kvm: Check irqchip mode
> > > before assign irqfd"), which added kvm_arch_irqfd_allowed(), it's impossible for
> > > kvm_irq_map_gsi() to encounter a NULL irq_routing _on x86_.
> > >
> > > But I strongly suspect other architectures can reach kvm_irq_map_gsi() with a
> > > NULL irq_routing, e.g. RISC-V dynamically configures its interrupt controller,
> > > yet doesn't implement kvm_arch_intc_initialized().
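
(For reference, the irqchip-mode check that commit added is x86's
kvm_arch_irqfd_allowed(), which kvm_irqfd() consults before assigning an
irqfd.  Quoted from memory, so double-check against arch/x86/kvm/irq_comm.c:

	bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
	{
		bool resample = args->flags & KVM_IRQFD_FLAG_RESAMPLE;

		/* Resampling irqfds require a full in-kernel IRQCHIP. */
		return resample ? irqchip_kernel(kvm) : irqchip_in_kernel(kvm);
	}

Architectures that don't override the weak default, which simply returns
true, get no such protection.)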
> > >
> > > So instead of special casing x86, what if we instead have KVM setup an empty
> > > IRQ routing table during kvm_create_vm(), and then avoid this mess entirely?
> > > That way x86 and s390 no longer need to set empty/dummy routing when creating
> > > an IRQCHIP, and the worst case scenario of userspace misusing an ioctl() is no
> > > longer a NULL pointer deref.
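
Concretely, I'm thinking something like the below.  Completely untested;
kvm_init_irq_routing() is a made-up name, and the layout mirrors struct
kvm_irq_routing_table in virt/kvm/irqchip.c:

	/*
	 * Hypothetical helper, called from kvm_create_vm() before any vCPU
	 * or irqfd can exist, i.e. before anything can dereference
	 * kvm->irq_routing, so installing the table needs no SRCU
	 * synchronization.
	 */
	int kvm_init_irq_routing(struct kvm *kvm)
	{
		struct kvm_irq_routing_table *new;

		/* Allocate a table with a single, empty GSI entry. */
		new = kzalloc(struct_size(new, map, 1), GFP_KERNEL_ACCOUNT);
		if (!new)
			return -ENOMEM;

		new->nr_rt_entries = 1;

		/* -1 == no GSI is routed to any irqchip pin. */
		memset(new->chip, -1, sizeof(new->chip));

		RCU_INIT_POINTER(kvm->irq_routing, new);
		return 0;
	}

kvm_set_irq_routing() would then always see a non-NULL table to replace,
and the dummy-routing setup on x86 and s390 becomes dead weight.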
>
> Sounds like a good idea. This should also speed up guest creation on s390,
> since it would avoid one synchronize_srcu.
> >
> > Setting up an empty IRQ routing table during kvm_create_vm() sounds like a
> > good idea; at that point no vCPUs have been created and kvm->lock is held,
> > so skipping synchronization is safe.
> >
> > However, there is one drawback: a VMM that emulates the irqchip itself,
> > e.g. qemu with the command line '-machine kernel-irqchip=off', may not
> > need an in-kernel irqchip at all. How do we handle this?
>
> I would be fine with wasted memory.
+1. If we really, really want to avoid the negligible memory overhead, we could
pre-configure a static global table and directly use that as the dummy table (and
exempt it from being freed by free_irq_routing_table()).
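
Roughly (untested sketch; the range designators are the GCC extension the
kernel already uses elsewhere):

	/*
	 * Hypothetical: a single, static dummy table shared by all VMs.
	 * chip[] is all -1, i.e. no GSI is routed to any irqchip pin, and
	 * nr_rt_entries == 0 keeps kvm_irq_map_gsi() away from map[].
	 */
	static struct kvm_irq_routing_table dummy_irq_routing = {
		.chip = { [0 ... KVM_NR_IRQCHIPS - 1] =
			  { [0 ... KVM_IRQCHIP_NUM_PINS - 1] = -1 } },
	};

	static void free_irq_routing_table(struct kvm_irq_routing_table *rt)
	{
		/* The shared dummy table must never be freed. */
		if (!rt || rt == &dummy_irq_routing)
			return;
		...
	}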
> The only question is whether it has a functional impact, or whether we can
> simply ignore the dummy routing.
Given the lack of sanity checks on kvm->irq_routing, I'm pretty sure the only way
for there to be functional impact is if there's a latent NULL pointer deref hiding
somewhere.
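
For reference, the arch-specific dummy-routing setup that could eventually
go away (quoted from memory, so treat as approximate): x86 installs an empty
table when KVM_CAP_SPLIT_IRQCHIP is enabled, and s390 does the equivalent in
KVM_CREATE_IRQCHIP:

	/* arch/x86/kvm/irq_comm.c */
	static const struct kvm_irq_routing_entry empty_routing[] = {};

	int kvm_setup_empty_irq_routing(struct kvm *kvm)
	{
		return kvm_set_irq_routing(kvm, empty_routing, 0, 0);
	}

	/* arch/s390/kvm/kvm-s390.c, KVM_CREATE_IRQCHIP */
	struct kvm_irq_routing_entry routing;

	memset(&routing, 0, sizeof(routing));
	r = kvm_set_irq_routing(kvm, &routing, 0, 0);

Both call sites would be superseded by the table installed at VM creation.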