Message-ID: <YMq6voIhXt7guI+W@yekko>
Date:   Thu, 17 Jun 2021 13:00:14 +1000
From:   David Gibson <david@...son.dropbear.id.au>
To:     Jean-Philippe Brucker <jean-philippe@...aro.org>
Cc:     Jason Gunthorpe <jgg@...dia.com>,
        "Tian, Kevin" <kevin.tian@...el.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Joerg Roedel <joro@...tes.org>,
        Lu Baolu <baolu.lu@...ux.intel.com>,
        David Woodhouse <dwmw2@...radead.org>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "Alex Williamson (alex.williamson@...hat.com)" 
        <alex.williamson@...hat.com>, Jason Wang <jasowang@...hat.com>,
        Eric Auger <eric.auger@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        "Liu, Yi L" <yi.l.liu@...el.com>, "Wu, Hao" <hao.wu@...el.com>,
        "Jiang, Dave" <dave.jiang@...el.com>,
        Jacob Pan <jacob.jun.pan@...ux.intel.com>,
        Kirti Wankhede <kwankhede@...dia.com>,
        Robin Murphy <robin.murphy@....com>
Subject: Re: [RFC] /dev/ioasid uAPI proposal

On Thu, Jun 10, 2021 at 06:37:31PM +0200, Jean-Philippe Brucker wrote:
> On Tue, Jun 08, 2021 at 04:31:50PM +1000, David Gibson wrote:
> > For the qemu case, I would imagine a two stage fallback:
> > 
> >     1) Ask for the exact IOMMU capabilities (including pagetable
> >        format) that the vIOMMU has.  If the host can supply, you're
> >        good
> > 
> >     2) If not, ask for a kernel managed IOAS.  Verify that it can map
> >        all the IOVA ranges the guest vIOMMU needs, and has an equal or
> >        smaller pagesize than the guest vIOMMU presents.  If so,
> >        software emulate the vIOMMU by shadowing guest io pagetable
> >        updates into the kernel managed IOAS.
> > 
> >     3) You're out of luck, don't start.
> >     
> > For both (1) and (2) I'd expect it to be asking this question *after*
> > saying what devices are attached to the IOAS, based on the virtual
> > hardware configuration.  That doesn't cover hotplug, of course, for
> > that you have to just fail the hotplug if the new device isn't
> > supportable with the IOAS you already have.
> 
> Yes. So there is a point in time when the IOAS is frozen, and cannot take
> in new incompatible devices. I think that can support the usage I had in
> mind. If the VMM (non-QEMU, let's say) wanted to create one IOASID FD per
> feature set it could bind the first device, freeze the features, then bind

Are you thinking of this "freeze the features" as an explicitly
triggered action?  I have suggested that an explicit "ENABLE" step
might be useful, but that hasn't had much traction from what I've
seen.
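
To make that concrete, here's roughly the flow I have in mind, as a
compile-only sketch: every ioctl wrapper and structure below is a
placeholder I've invented for illustration, not anything from the
actual RFC.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical capability report; not the RFC's uAPI. */
struct ioas_info {
	uint32_t pgtable_format;   /* hw io-pagetable format id */
	bool     pasid, pri;
	uint64_t pgsize_bitmap;    /* supported IOVA page sizes */
};

/* Hypothetical wrappers around /dev/ioasid ioctls. */
int ioas_attach_device(int ioas_fd, int device_fd);
int ioas_get_info(int ioas_fd, struct ioas_info *info);
int ioas_set_hw_pagetable(int ioas_fd, uint32_t format);
int ioas_enable(int ioas_fd);      /* the explicit "freeze" step */

static int setup_viommu(int ioas_fd, const int *devs, int ndevs,
			const struct ioas_info *want)
{
	struct ioas_info host;
	int i;

	/* Attach the whole cold-plug config first, so the kernel
	 * answers for the devices we will actually use. */
	for (i = 0; i < ndevs; i++)
		if (ioas_attach_device(ioas_fd, devs[i]) < 0)
			return -1;

	if (ioas_get_info(ioas_fd, &host) < 0)
		return -1;

	/* Stage 1: exact match, hand guest pagetables to the hw. */
	if (host.pgtable_format == want->pgtable_format &&
	    ioas_set_hw_pagetable(ioas_fd, want->pgtable_format) == 0)
		return ioas_enable(ioas_fd);

	/* Stage 2: kernel-managed IOAS, shadowing guest io pagetable
	 * updates.  Needs the host's smallest page size to be no
	 * larger than the guest's (IOVA range checks omitted). */
	if ((host.pgsize_bitmap & -host.pgsize_bitmap) <=
	    (want->pgsize_bitmap & -want->pgsize_bitmap))
		return ioas_enable(ioas_fd);

	return -1;	/* stage 3: out of luck, don't start */
}

An explicit ioas_enable() at the end of each path is exactly the
"freeze" point I'd want: once it succeeds, attaching an incompatible
device simply fails rather than changing the IOAS's properties.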

> the second device. If the second bind fails it creates a new FD, allowing
> to fall back to (2) for the second device while keeping (1) for the first
> device. A paravirtual IOMMU like virtio-iommu could easily support this as
> it describes pIOMMU properties for each device to the guest. An emulated
> vIOMMU could also support some hybrid cases as you describe below.

Eh.. in some cases.  The vIOMMU model will often dictate which guest
side devices need to share an address space (with qemu's PAPR vIOMMU,
for example, every device under the same virtual PHB shares one
address space), which may make it very impractical to have them in
different address spaces on the host side.

> > One can imagine optimizations where for certain intermediate cases you
> > could do a lighter SW emu if the host supports a model that's close to
> > the vIOMMU one, and you're able to trap and emulate the differences.
> > In practice I doubt anyone's going to have time to look for such cases
> > and implement the logic for it.
> > 
> > > For example depending whether the hardware IOMMU is SMMUv2 or SMMUv3, that
> > > completely changes the capabilities offered to the guest (some v2
> > > implementations support nesting page tables, but never PASID nor PRI
> > > unlike v3.) The same vIOMMU could support either, presenting different
> > > capabilities to the guest, even multiple page table formats if we wanted
> > > to be exhaustive (SMMUv2 supports the older 32-bit descriptor), but it
> > > needs to know early on what the hardware is precisely. Then some new page
> > > table format shows up and, although the vIOMMU can support that in
> > > addition to older ones, QEMU will have to pick a single one, that it
> > > assumes the guest knows how to drive?
> > > 
> > > I think once it binds a device to an IOASID fd, QEMU will want to probe
> > > what hardware features are available before going further with the vIOMMU
> > > setup (is there PASID, PRI, which page table formats are supported,
> > > address size, page granule, etc). Obtaining precise information about the
> > > hardware would be less awkward than trying different configurations until
> > > one succeeds. Binding an additional device would then fail if its pIOMMU
> > > doesn't support exactly the features supported for the first device,
> > > because we don't know which ones the guest will choose. QEMU will have to
> > > open a new IOASID fd for that device.
> > 
> > No, this fundamentally misunderstands the qemu model.  The user
> > *chooses* the guest visible platform, and qemu supplies it or fails.
> > There is no negotiation with the guest, because this makes managing
> > migration impossibly difficult.
> 
> I'd like to understand better where the difficulty lies, with migration.
> Is the problem, once we have a guest running on physical machine A, to
> make sure that physical machine B supports the same IOMMU properties
> before migrating the VM over to B?  Why can't QEMU (instead of the user)
> select a feature set on machine A, then when time comes to migrate, query
> all information from the host kernel on machine B and check that it
> matches what was picked for machine A?  Or is it only trying to
> accommodate different sets of features between A and B, that would be too
> difficult?

There are two problems:

1) Although it could be done in theory, it's hard, and it would need
a huge rewrite of qemu's whole migration infrastructure.
We'd need a way of representing host features, working out which sets
are compatible with which others depending on what things the guest is
allowed to use, encoding the information in the migration stream and
reporting failure.  None of this exists now.

Indeed qemu requires that you create the (stopped) machine on the
destination (including virtual hardware configuration) before even
attempting to process the incoming migration.  For the most part it
does not transfer the machine configuration in the migration stream.
Now, that's generally considered a flaw with the design, but fixing it
is a huge project that no-one's really had the energy to begin despite
the idea being around for years.

2) It makes behaviour really hard to predict for management layers
above.  Things like oVirt automatically migrate VMs around a cluster
for load balancing.  At the moment the model which works is basically
that if you request the same guest features on each end of the
migration, and qemu starts with that configuration on each end, the
migration should work (or only fail for transient reasons).  If you
can't know if the migration is possible until you get the incoming
stream, reporting and exposing what will and won't work to the layer
above also becomes an immensely fiddly problem.
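
Just to give a sense of the scale: even the minimal version of the
check you describe means qemu on A has to put something like the
record below into the migration stream, and qemu on B has to re-query
its host and compare.  The structure is invented for illustration -
nothing like it exists in qemu's migration stream today.

#include <stdbool.h>
#include <stdint.h>

/* Invented for illustration; not an existing qemu structure. */
struct iommu_feature_record {
	uint32_t pgtable_format;
	uint32_t flags;            /* PASID, PRI, ... */
	uint64_t pgsize_bitmap;    /* page sizes the guest may use */
	uint64_t iova_start, iova_end;  /* simplified: one range */
};

/* On machine B, after re-querying the local host. */
static bool iommu_compatible(const struct iommu_feature_record *src,
			     const struct iommu_feature_record *dst)
{
	/* The destination must offer at least everything the guest
	 * was allowed to use on the source. */
	return dst->pgtable_format == src->pgtable_format &&
	       (dst->flags & src->flags) == src->flags &&
	       (dst->pgsize_bitmap & src->pgsize_bitmap)
						== src->pgsize_bitmap &&
	       dst->iova_start <= src->iova_start &&
	       dst->iova_end >= src->iova_end;
}

And that's the easy part; the hard part is plumbing the result back
out so the management layer hears "this migration cannot work" before
it has torn anything down.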

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
