Message-ID: <YLn/SJtzuJopSO2x@myrica>
Date: Fri, 4 Jun 2021 12:24:08 +0200
From: Jean-Philippe Brucker <jean-philippe@...aro.org>
To: David Gibson <david@...son.dropbear.id.au>
Cc: Jason Gunthorpe <jgg@...dia.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
David Woodhouse <dwmw2@...radead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Alex Williamson (alex.williamson@...hat.com)"
<alex.williamson@...hat.com>, Jason Wang <jasowang@...hat.com>,
Eric Auger <eric.auger@...hat.com>,
Jonathan Corbet <corbet@....net>,
"Raj, Ashok" <ashok.raj@...el.com>,
"Liu, Yi L" <yi.l.liu@...el.com>, "Wu, Hao" <hao.wu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Kirti Wankhede <kwankhede@...dia.com>,
Robin Murphy <robin.murphy@....com>
Subject: Re: [RFC] /dev/ioasid uAPI proposal
On Thu, Jun 03, 2021 at 03:45:09PM +1000, David Gibson wrote:
> > > But it would certainly be possible for a system to have two
> > > different host bridges with two different IOMMUs with different
> > > pagetable formats. Until you know which devices (and therefore
> > > which host bridge) you're talking about, you don't know what formats
> > > of pagetable to accept. And if you have devices from *both* bridges
> > > you can't bind a page table at all - you could theoretically support
> > > a kernel managed pagetable by mirroring each MAP and UNMAP to tables
> > > in both formats, but it would be pretty reasonable not to support
> > > that.
> >
> > The basic process for a user space owned pgtable mode would be:
> >
> > 1) qemu has to figure out what format of pgtable to use
> >
> > Presumably it uses query functions using the device label.
>
> No... in the qemu case it would always select the page table format
> that it needs to present to the guest. That's part of the
> guest-visible platform that's selected by qemu's configuration.
>
> There's no negotiation here: either the kernel can supply what qemu
> needs to pass to the guest, or it can't. If it can't, qemu will have
> to either emulate in SW (if possible, probably using a kernel-managed
> IOASID to back it) or fail outright.
>
> > The
> > kernel code should look at the entire device path through all the
> > IOMMU HW to determine what is possible.
> >
> > Or it already knows because the VM's vIOMMU is running in some
> > fixed page table format, or the VM's vIOMMU already told it, or
> > something.
>
> Again, I think you have the order a bit backwards. The user selects
> the capabilities that the vIOMMU will present to the guest as part of
> the qemu configuration. Qemu then requests that of the host kernel,
> and either the host kernel supplies it, qemu emulates it in SW, or
> qemu fails to start.
Hm, how fine a capability are we talking about? If it's just "give me
VT-d capabilities" or "give me Arm capabilities" that would work, but
probably isn't useful. Anything finer will be awkward because userspace
will have to try combinations of capabilities to see what sticks, and
supporting new hardware would drop compatibility with older hardware.
For example, depending on whether the hardware IOMMU is SMMUv2 or
SMMUv3, the capabilities offered to the guest change completely (some
v2 implementations support nested page tables, but never PASID or PRI,
unlike v3.) The same vIOMMU could support either, presenting different
capabilities to the guest, even multiple page table formats if we
wanted to be exhaustive (SMMUv2 supports the older 32-bit descriptor),
but it needs to know early on precisely what the hardware is. Then
some new page table format shows up and, although the vIOMMU could
support it in addition to the older ones, QEMU will have to pick a
single one that it assumes the guest knows how to drive?
I think once it binds a device to an IOASID fd, QEMU will want to probe
what hardware features are available before going further with the vIOMMU
setup (is there PASID, PRI, which page table formats are supported,
address size, page granule, etc). Obtaining precise information about the
hardware would be less awkward than trying different configurations until
one succeeds. Binding an additional device would then fail if its pIOMMU
doesn't support exactly the features supported for the first device,
because we don't know which ones the guest will choose. QEMU will have to
open a new IOASID fd for that device.
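Roughly, something like the sketch below -- none of these ioctls,
structs or flag values exist in the current proposal, the names are
only illustrative of the kind of probe interface I have in mind:

/* Illustrative only: ioctl number, struct layout and flag values are
 * made up, not part of the RFC. */
#include <err.h>
#include <stdint.h>
#include <sys/ioctl.h>

#define IOASID_GET_HW_INFO      _IO(';', 0x40)

#define IOASID_HW_INFO_PASID    (1ULL << 0)   /* PASID supported */
#define IOASID_HW_INFO_PRI      (1ULL << 1)   /* Page Request Interface */
#define IOASID_FMT_ARM_SMMU_V3  (1ULL << 0)   /* SMMUv3 stage-1 tables */

struct ioasid_hw_info {
	uint32_t argsz;
	uint64_t flags;            /* PASID, PRI, ... */
	uint64_t pgtable_formats;  /* bitmask of supported formats */
	uint32_t addr_width;       /* input address size, in bits */
	uint64_t pgsize_bitmap;    /* supported page granules */
};

/* Called by QEMU after binding the first device to the IOASID fd,
 * before committing to a vIOMMU configuration. */
static int probe_iommu_caps(int ioasid_fd)
{
	struct ioasid_hw_info info = { .argsz = sizeof(info) };

	if (ioctl(ioasid_fd, IOASID_GET_HW_INFO, &info) < 0)
		err(1, "IOASID_GET_HW_INFO");

	/* Can the guest-visible vIOMMU configuration be satisfied? */
	if (!(info.pgtable_formats & IOASID_FMT_ARM_SMMU_V3) ||
	    !(info.flags & IOASID_HW_INFO_PASID))
		return -1;      /* emulate in SW or refuse to start */

	return 0;
}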
Thanks,
Jean
>
> Guest visible properties of the platform never (or *should* never)
> depend implicitly on host capabilities - it's impossible to sanely
> support migration in such an environment.
>
> > 2) qemu creates an IOASID and, based on #1, says 'I want this format'
>
> Right.
>
> > 3) qemu binds the IOASID to the device.
> >
> > If qemu gets it wrong then it just fails.
>
> Right, though it may be possible to fall back to (partial) software
> emulation. In practice that would mean using a kernel-managed IOASID
> and walking the guest IO pagetables itself to mirror them into the
> host kernel.
>
> > 4) For the next device, qemu would have to figure out if it can re-use
> > an existing IOASID based on the required properties.
>
> Nope. Again, what devices share an IO address space is a guest
> visible part of the platform. If the host kernel can't supply that,
> then qemu must not start (or fail the hotplug if the new device is
> being hotplugged).
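To make the allocate/attach part concrete as well, here is how I
picture QEMU driving steps 2-4 -- again with made-up ioctl names and
structs, since the point is only the order of operations (the format
comes from QEMU's configuration, the kernel merely accepts or rejects
it):

/* Illustrative only: the ioctls and structs below are invented for
 * this example, they do not appear in the RFC. */
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>

#define IOASID_ALLOC            _IO(';', 0x41)
#define IOASID_ATTACH_DEVICE    _IO(';', 0x42)
#define IOASID_FMT_ARM_SMMU_V3  1

struct ioasid_alloc_req {
	uint32_t argsz;
	uint32_t pgtable_format;   /* fixed by QEMU's configuration */
	uint32_t addr_width;
};

struct ioasid_attach_req {
	uint32_t argsz;
	uint32_t ioasid;
	int32_t  device_fd;        /* e.g. a VFIO device fd */
};

int main(void)
{
	int device_fd = -1;        /* would come from VFIO in practice */
	int fd = open("/dev/ioasid", O_RDWR);

	if (fd < 0)
		err(1, "/dev/ioasid");

	/* 2) Request the page table format the guest-visible vIOMMU
	 *    needs.  The kernel either provides it or the call fails. */
	struct ioasid_alloc_req alloc = {
		.argsz = sizeof(alloc),
		.pgtable_format = IOASID_FMT_ARM_SMMU_V3,
		.addr_width = 48,
	};
	int ioasid = ioctl(fd, IOASID_ALLOC, &alloc);
	if (ioasid < 0)
		err(1, "IOASID_ALLOC");    /* emulate in SW or fail */

	/* 3) Bind the first device to the IOASID. */
	struct ioasid_attach_req attach = {
		.argsz = sizeof(attach),
		.ioasid = ioasid,
		.device_fd = device_fd,
	};
	if (ioctl(fd, IOASID_ATTACH_DEVICE, &attach) < 0)
		err(1, "IOASID_ATTACH_DEVICE");

	/* 4) A second device that the guest expects to share this IO
	 *    address space attaches to the same IOASID; if its pIOMMU
	 *    can't provide the same features, the attach fails and
	 *    QEMU must not start (or must fail the hotplug). */
	return 0;
}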