Message-ID: <87bl7wczkp.ffs@nanos.tec.linutronix.de>
Date: Wed, 23 Jun 2021 18:31:34 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: "Tian, Kevin" <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>
Cc: Jason Gunthorpe <jgg@...dia.com>,
"Dey, Megha" <megha.dey@...el.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
"Pan, Jacob jun" <jacob.jun.pan@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>,
"Lu, Baolu" <baolu.lu@...el.com>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Luck, Tony" <tony.luck@...el.com>,
"Kumar, Sanjay K" <sanjay.k.kumar@...el.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>,
Kirti Wankhede <kwankhede@...dia.com>,
Peter Zijlstra <peterz@...radead.org>,
Marc Zyngier <maz@...nel.org>,
Bjorn Helgaas <helgaas@...nel.org>
Subject: RE: Virtualizing MSI-X on IMS via VFIO
On Wed, Jun 23 2021 at 06:12, Kevin Tian wrote:
>> From: Thomas Gleixner <tglx@...utronix.de>
>> So the only downside today of allocating more MSI-X vectors than
>> necessary is memory consumption for the irq descriptors.
>
> Curious about irte entry when IRQ remapping is enabled. Is it also
> allocated at request_irq()?
Good question. No, it has to be allocated right away. We stick the
shutdown vector into the IRTE and then request_irq() will update it with
the real one.
> So the correct flow is like below:
>
> guest::enable_msix()
> trapped_by_host()
> pci_alloc_irq_vectors(); // for all possible vMSI-X entries
> pci_enable_msix();
>
> guest::unmask()
> trapped_by_host()
> request_irqs();
>
> the first trap calls a new VFIO ioctl e.g. VFIO_DEVICE_ALLOC_IRQS.
>
> the 2nd trap can reuse existing VFIO_DEVICE_SET_IRQS which just
> does request_irq() if specified irqs have been allocated.
>
> Then map ims to this flow:
>
> guest::enable_msix()
> trapped_by_host()
> msi_domain_alloc_irqs(); // for all possible vMSI-X entries
> for_all_allocated_irqs(i)
> pci_update_msi_desc_id(i, default_pasid); // a new helper func
>
> guest::unmask(entry#0)
> trapped_by_host()
> request_irqs();
> ims_array_irq_startup(); // write msi_desc.id (default_pasid) to ims entry
>
> guest::set_msix_perm(entry#1, guest_sva_pasid)
> trapped_by_host()
> pci_update_msi_desc_id(1, host_sva_pasid);
>
> guest::unmask(entry#1)
> trapped_by_host()
> request_irqs();
> ims_array_irq_startup(); // write msi_desc.id (host_sva_pasid) to ims entry
That's one way to do it, but it still has the same problem: the
request_irq() in the guest succeeds even if the host side fails.
As this is really new stuff there is no good reason to force it into
the existing VFIO/MSI-X code with all its known downsides and
limitations.
The point is that IMS can just add another interrupt to a device on the
fly without any of the PCI/MSI-X nasties. So why not take advantage
of that?
I can see the point of using PCI to expose the device to the guest
because it's trivial to enumerate, but contrary to VF devices there is
no legacy, and the mechanism for setting up the device interrupts can
be completely different from PCI/MSI-X.
Exposing some trappable "IMS" storage in a separate PCI BAR won't cut
it either, because it still has the same problem: the allocation or the
request_irq() on the host can fail without any feedback to the guest.
So IMO creating a proper paravirt interface is the right approach. It
avoids _all_ of the trouble and will be necessary anyway once you want
to support devices which store the message/PASID in system memory and
not in on-device memory.
Thanks,
tglx