Message-ID: <MWHPR11MB1886BB017C6C53A8061DDEE28C089@MWHPR11MB1886.namprd11.prod.outlook.com>
Date: Wed, 23 Jun 2021 23:37:03 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Alex Williamson <alex.williamson@...hat.com>
CC: Jason Gunthorpe <jgg@...dia.com>,
"Dey, Megha" <megha.dey@...el.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
"Pan, Jacob jun" <jacob.jun.pan@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>, "Lu, Baolu" <baolu.lu@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Luck, Tony" <tony.luck@...el.com>,
"Kumar, Sanjay K" <sanjay.k.kumar@...el.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>,
Kirti Wankhede <kwankhede@...dia.com>,
"Peter Zijlstra" <peterz@...radead.org>,
Marc Zyngier <maz@...nel.org>,
"Bjorn Helgaas" <helgaas@...nel.org>
Subject: RE: Virtualizing MSI-X on IMS via VFIO
> From: Thomas Gleixner <tglx@...utronix.de>
> Sent: Thursday, June 24, 2021 12:32 AM
>
> On Wed, Jun 23 2021 at 06:12, Kevin Tian wrote:
> >> From: Thomas Gleixner <tglx@...utronix.de>
> >> So the only downside today of allocating more MSI-X vectors than
> >> necessary is memory consumption for the irq descriptors.
> >
> > Curious about irte entry when IRQ remapping is enabled. Is it also
> > allocated at request_irq()?
>
> Good question. No, it has to be allocated right away. We stick the
> shutdown vector into the IRTE and then request_irq() will update it with
> the real one.
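Just to double-check my understanding of that two-phase programming, in
pseudo-code (the helper names here are made up for illustration, not the
actual intel-ir code):

    msi_domain_alloc_irqs()
        alloc_irte();                   // IRTE slot consumed right away
        program_irte(SHUTDOWN_VECTOR);  // placeholder vector

    request_irq()
        irq_activate()/startup()
            program_irte(real_vector);  // updated with the real vector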
There are at most 64K IRTE entries per Intel VT-d unit. Do we consider
them a limited resource in this new model, even though that is much more
than the number of CPU vectors?
>
> > So the correct flow is like below:
> >
> > guest::enable_msix()
> >     trapped_by_host()
> >         pci_alloc_irq_vectors(); // for all possible vMSI-X entries
> >         pci_enable_msix();
> >
> > guest::unmask()
> >     trapped_by_host()
> >         request_irqs();
> >
> > the first trap calls a new VFIO ioctl e.g. VFIO_DEVICE_ALLOC_IRQS.
> >
> > the 2nd trap can reuse the existing VFIO_DEVICE_SET_IRQS which just
> > does request_irq() if the specified irqs have been allocated.
> >
> > Then map ims to this flow:
> >
> > guest::enable_msix()
> >     trapped_by_host()
> >         msi_domain_alloc_irqs(); // for all possible vMSI-X entries
> >         for_all_allocated_irqs(i)
> >             pci_update_msi_desc_id(i, default_pasid); // a new helper func
> >
> > guest::unmask(entry#0)
> >     trapped_by_host()
> >         request_irqs();
> >         ims_array_irq_startup(); // write msi_desc.id (default_pasid) to ims entry
> >
> > guest::set_msix_perm(entry#1, guest_sva_pasid)
> >     trapped_by_host()
> >         pci_update_msi_desc_id(1, host_sva_pasid);
> >
> > guest::unmask(entry#1)
> >     trapped_by_host()
> >         request_irqs();
> >         ims_array_irq_startup(); // write msi_desc.id (host_sva_pasid) to ims entry
>
> That's one way to do that, but that still has the same problem that the
> request_irq() in the guest succeeds even if the host side fails.
yes
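In host-side pseudo-code the two traps would roughly look like the below
(only VFIO_DEVICE_ALLOC_IRQS is the new ioctl proposed earlier; the vfio
helper and handler names are made up), and the error from request_irq()
in the 2nd trap is exactly what the guest never gets to see today:

    /* trap #1: guest enables vMSI-X -> VFIO_DEVICE_ALLOC_IRQS */
    static int vfio_pci_alloc_irqs(struct vfio_pci_device *vdev, int nvec)
    {
            /* reserve vectors (and IRTEs) for all possible vMSI-X entries */
            return pci_alloc_irq_vectors(vdev->pdev, nvec, nvec, PCI_IRQ_MSIX);
    }

    /* trap #2: guest unmasks entry i -> VFIO_DEVICE_SET_IRQS */
    static int vfio_pci_request_irq(struct vfio_pci_device *vdev, int i)
    {
            int irq = pci_irq_vector(vdev->pdev, i);

            /* a failure here is currently not visible to the guest */
            return request_irq(irq, vfio_msix_handler, 0, "vfio-msix", vdev);
    }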
>
> As this is really new stuff there is no real good reason to force that
> into the existing VFIO/MSIX stuff with all its known downsides and
> limitations.
>
> The point is, that IMS can just add another interrupt to a device on the
> fly without doing any of the PCI/MSIX nasties. So why not take advantage
> of that?
>
> I can see the point of using PCI to expose the device to the guest
> because it's trivial to enumerate, but contrary to VF devices there is
It is also about compatibility, since PCI is supported by almost all OSes.
> no legacy and the mechanism how to setup the device interrupts can be
> completely different from PCI/MSIX.
>
> Exposing some trappable "IMS" storage in a separate PCI bar won't cut it
> because this still has the same problem that the allocation or
> request_irq() on the host can fail w/o feedback.
Yes, to fully fix the said nastiness some feedback mechanism is required.
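e.g. even something as simple as a per-entry status that the host writes
back after its allocation/request_irq() and that the guest checks after
unmask would do (purely illustrative, not a proposal of a concrete layout):

    /* purely illustrative feedback record, one per vMSI-X/IMS entry */
    struct irq_alloc_status {
            __u32   entry;          /* entry index in the guest */
            __s32   result;         /* 0 or -errno from the host side */
    };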
>
> So IMO creating a proper paravirt interface is the right approach. It
> avoids _all_ of the trouble and will be necessary anyway once you want
> to support devices which store the message/pasid in system memory and
> not in on-device memory.
>
While I agree that a paravirt interface is definitely cleaner, I wonder
whether it should be pursued as an orthogonal effort or be tied to all
new ims-capable devices. Back in the earlier discussion about guest ims
support, you explained a layered model where the paravirt interface sits
between the msi domain and the vector domain to get the addr/data pair
from the host. That way it provides a feedback mechanism for both msi
and ims devices, i.e. it is not specific to ims. Then, considering the
transition window where not all guest OSes support the paravirt
interface at the same time (or there are multiple paravirt interfaces
which take time for the host to support), would the staging approach
below still make sense?

1) Fix the lost interrupt issue in the existing MSI virtualization flow;
2) Virtualize MSI-X on IMS, bearing the same request_irq() problem;
3) Develop a paravirt interface to solve the request_irq() problem for
   both msi and ims devices (rough sketch below);
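Just to illustrate what I mean by 3), a very rough guest-side sketch
(pv_msi_get_msg() is a made-up hypercall wrapper; the point is only that
a host-side failure propagates into the guest's request_irq()):

    static int pv_msi_activate(struct irq_domain *domain,
                               struct irq_data *irqd, bool reserve)
    {
            struct msi_msg msg;
            int ret;

            /* host allocates the vector/IRTE/IMS slot, returns addr/data */
            ret = pv_msi_get_msg(irqd->hwirq, &msg);
            if (ret)
                    return ret;     /* guest request_irq() then sees the failure */

            /*
             * The msg is then handed to the device level (PCI/MSI-X or
             * IMS specific) to be programmed into the entry; omitted here.
             */
            return 0;
    }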
Thanks
Kevin