Message-ID: <20210603154407.6fe33880.alex.williamson@redhat.com>
Date: Thu, 3 Jun 2021 15:44:07 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: "Tian, Kevin" <kevin.tian@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
"Jiang, Dave" <dave.jiang@...el.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
Jonathan Corbet <corbet@....net>,
Robin Murphy <robin.murphy@....com>,
LKML <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
David Gibson <david@...son.dropbear.id.au>,
Kirti Wankhede <kwankhede@...dia.com>,
David Woodhouse <dwmw2@...radead.org>,
Jason Wang <jasowang@...hat.com>
Subject: Re: [RFC] /dev/ioasid uAPI proposal
On Thu, 3 Jun 2021 17:10:18 -0300
Jason Gunthorpe <jgg@...dia.com> wrote:
> On Thu, Jun 03, 2021 at 02:01:46PM -0600, Alex Williamson wrote:
>
> > > > > 1) Mixing IOMMU_CAP_CACHE_COHERENCY and !IOMMU_CAP_CACHE_COHERENCY
> > > > > domains.
> > > > >
> > > > > This doesn't actually matter. If you mix them together then kvm
> > > > > will turn on wbinvd anyhow, so we don't need to set DMA_PTE_SNP
> > > > > anywhere in this VM.
> > > > >
> > > > > Thus if two IOMMUs are joined together into a single /dev/ioasid
> > > > > then we can just make them both pretend to be
> > > > > !IOMMU_CAP_CACHE_COHERENCY and both not set IOMMU_CACHE.
> > > >
> > > > Yes and no. Yes, if any domain is !IOMMU_CAP_CACHE_COHERENCY then we
> > > > need to emulate wbinvd, but no we'll use IOMMU_CACHE any time it's
> > > > available based on the per domain support available. That gives us the
> > > > most consistent behavior, ie. we don't have VMs emulating wbinvd
> > > > because they used to have a device attached where the domain required
> > > > it and we can't atomically remap with new flags to perform the same as
> > > > a VM that never had that device attached in the first place.
> > >
> > > I think we are saying the same thing..
> >
> > Hrm? I think I'm saying the opposite of your "both not set
> > IOMMU_CACHE". IOMMU_CACHE is the mapping flag that enables
> > DMA_PTE_SNP. Maybe you're using IOMMU_CACHE as the state reported to
> > KVM?
>
> I'm saying if we enable wbinvd in the guest then no IOASIDs used by
> that guest need to set DMA_PTE_SNP.
Yes
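To be concrete about which knob I mean: in intel-iommu's map path
today, the per-mapping IOMMU_CACHE request is what becomes the
DMA_PTE_SNP PTE bit, and only when the hardware can enforce snooping.
Roughly (a sketch from memory, not the literal code):

  static int intel_iommu_map(struct iommu_domain *domain,
			     unsigned long iova, phys_addr_t hpa,
			     size_t size, int iommu_prot, gfp_t gfp)
  {
	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
	int prot = 0;

	if (iommu_prot & IOMMU_READ)
		prot |= DMA_PTE_READ;
	if (iommu_prot & IOMMU_WRITE)
		prot |= DMA_PTE_WRITE;
	/* IOMMU_CACHE -> DMA_PTE_SNP, gated on hardware support */
	if ((iommu_prot & IOMMU_CACHE) && dmar_domain->iommu_snooping)
		prot |= DMA_PTE_SNP;
	/* ... walk the page table and install prot into the PTEs ... */
  }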
> If we disable wbinvd in the guest
> then all IOASIDs must enforce DMA_PTE_SNP (or we otherwise guarantee
> no-snoop is not possible).
Yes, but we can't get from one of these states to the other atomically
wrt the device DMA.
> This is not what VFIO does today, but it is a reasonable choice.
>
> Based on that observation, we can say that as soon as the user wants
> to use an IOMMU that does not support DMA_PTE_SNP in the guest, we can
> still share the IO page table with IOMMUs that do support DMA_PTE_SNP.
If your goal is to prioritize IO page table sharing, sure. But because
we cannot atomically transition from one to the other, each device is
stuck with the page tables it has, so the history of the VM becomes a
factor in its performance characteristics.

For example, if device {A} is backed by an IOMMU capable of blocking
no-snoop and device {B} is backed by an IOMMU which cannot block
no-snoop, then booting VM1 with {A,B} and later removing device {B}
would result in ongoing wbinvd emulation in VM1, versus VM2, which was
only ever booted with {A}.
Type1 would use separate IO page tables (domains/ioasids) for these such
that VM1 and VM2 have the same characteristics at the end.
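Paraphrasing vfio_iommu_type1.c from memory (so treat this as a
sketch, the details may be off), the attach path records whether the
backing IOMMU can enforce snooping and only shares a domain between
groups when those properties match:

  /*
   * Note whether this group's IOMMU can enforce snooping; the
   * flag feeds IOMMU_CACHE into every subsequent mapping.
   */
  if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
	domain->prot |= IOMMU_CACHE;

  /*
   * Reuse an existing domain only when the properties match,
   * otherwise the group keeps its own domain, so removing a
   * device never changes the flags used by those that remain.
   */
  list_for_each_entry(d, &iommu->domain_list, next) {
	if (d->domain->ops == domain->domain->ops &&
	    d->prot == domain->prot) {
		/* attach the group to d, discard the new domain */
	}
  }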
Does this become user-defined policy in the IOASID model? There's
quite a mess here: GET_INFO on an IOASID needs to expose enough for
the user to learn such properties of the IOMMU, plus maybe we need
mapping flags equivalent to IOMMU_CACHE exposed to the user, plus we
need to prevent sharing an IOASID in a way that could generate IOMMU
faults, etc.
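Something like the below is what I mean; to be clear, every name here
is invented for illustration, none of it is in the RFC:

  /*
   * HYPOTHETICAL, for discussion only: a GET_INFO capability bit
   * telling the user whether an IOASID can enforce snooping, and
   * a per-map flag mirroring IOMMU_CACHE.
   */
  struct ioasid_info {
	__u32	argsz;
	__u32	flags;
  #define IOASID_INFO_CAP_SNOOP_CTL	(1 << 0) /* can set DMA_PTE_SNP */
  };

  struct ioasid_dma_map {
	__u32	argsz;
	__u32	flags;
  #define IOASID_MAP_FLAG_SNOOP		(1 << 0) /* like IOMMU_CACHE */
	__u64	iova;
	__u64	vaddr;
	__u64	size;
  };

The user then owns the policy decision of whether to share one IOASID
across devices with mismatched capabilities or to keep them separate.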
> > > It doesn't solve the problem to connect kvm to AP and kvmgt though
> >
> > It does not, we'll probably need a vfio ioctl to gratuitously announce
> > the KVM fd to each device. I think some devices might currently fail
> > their open callback if that linkage isn't already available though, so
> > it's not clear when that should happen, ie. it can't currently be a
> > VFIO_DEVICE ioctl as getting the device fd requires an open, but this
> > proposal requires some availability of the vfio device fd without any
> > setup, so presumably that won't yet call the driver open callback.
> > Maybe that's part of the attach phase now... I'm not sure, it's not
> > clear when the vfio device uAPI starts being available in the process
> > of setting up the ioasid. Thanks,
>
> At a certain point we maybe just have to stick to backward compat, I
> think. Though it is useful to think about green-field alternatives to
> try to guide the backward compat design..
I think more to drive the replacement design; if we can't figure out
how to do something other than backwards compatibility trickery in the
kernel, it's probably going to bite us.
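Going back to the KVM linkage above, the gratuitous announcement I was
imagining might look something like the below; the ioctl name, number,
and struct are all invented, this is only meant to show the shape:

  /*
   * HYPOTHETICAL, not an existing ioctl: a device-level way to
   * hand the KVM fd to vfio before any driver open/attach path
   * needs it, replacing today's group-based kvm-vfio linkage.
   */
  struct vfio_device_bind_kvm {
	__u32	argsz;
	__u32	flags;
	__s32	kvm_fd;		/* fd from KVM_CREATE_VM */
  };
  #define VFIO_DEVICE_BIND_KVM	_IO(VFIO_TYPE, VFIO_BASE + 42)

The ordering question above still stands though, ie. whether this can
be issued before the driver's open callback runs.  Thanks,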
Alex