Message-ID: <13cf51210d125d48a47d55d9c6a20c93f5a2b78b.camel@linux.ibm.com>
Date: Thu, 10 Feb 2022 12:15:58 +0100
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Jason Gunthorpe <jgg@...dia.com>,
Matthew Rosato <mjrosato@...ux.ibm.com>
Cc: Alex Williamson <alex.williamson@...hat.com>,
linux-s390@...r.kernel.org, cohuck@...hat.com,
farman@...ux.ibm.com, pmorel@...ux.ibm.com,
borntraeger@...ux.ibm.com, hca@...ux.ibm.com, gor@...ux.ibm.com,
gerald.schaefer@...ux.ibm.com, agordeev@...ux.ibm.com,
frankja@...ux.ibm.com, david@...hat.com, imbrenda@...ux.ibm.com,
vneethv@...ux.ibm.com, oberpar@...ux.ibm.com, freude@...ux.ibm.com,
thuth@...hat.com, pasic@...ux.ibm.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 24/30] vfio-pci/zdev: wire up group notifier

On Tue, 2022-02-08 at 16:40 -0400, Jason Gunthorpe wrote:
> On Tue, Feb 08, 2022 at 03:33:58PM -0500, Matthew Rosato wrote:
>
> > > Is the purpose of IOAT to associate the device to a set of KVM page
> > > tables? That seems like a container or future iommufd operation. I
> >
> > Yes, here we are establishing a relationship with the DMA table in the guest
> > so that once mappings are established guest PCI operations (handled via
> > special instructions in s390) don't need to go through the host but can be
> > directly handled by firmware (so, effectively guest can keep running on its
> > vcpu vs breaking out).
>
> Oh, well, certainly sounds like a NAK on that - anything to do with
> the DMA translation of a PCI device must go through the iommu layer,
> not here.
>
> Lets not repeat the iommu subsytem bypass mess power made please.

Maybe some context on all of this. First, it's important to note that
on s390x the PCI IOMMU hardware is controlled with special
instructions. For pass-through this is actually quite nice, as it
makes it relatively simple for us to always run with an IOMMU in the
guest; we simply need to provide the instructions. This means we get
full IOMMU protection for pass-through devices in KVM guests, guests
with pass-through remain pageable, and we can even support nested
pass-through.

This is possible with relatively little overhead because we can do all
of the per-map/unmap guest IOMMU operations with a single instruction
intercept. The instruction we need to intercept is called Refresh PCI
Translations (RPCIT). Its job is twofold.

For an OS running directly on our machine hypervisor (LPAR), it
flushes the IOMMU's TLB by informing it which pages have been
invalidated, while the hardware walks the page tables and fills the
TLB on its own when establishing a mapping for previously invalid
IOVAs.
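
To make that concrete, the OS-side sequence is the same whether we run
on LPAR or under a hypervisor; only the semantics of the RPCIT differ.
Roughly, as a sketch (simplified pseudo-C; map_one_page(),
update_dma_table_entry() and rpcit() are placeholder names, not the
actual kernel helpers):

/*
 * Rough sketch only: how an OS maps one page for DMA on s390x.
 * update_dma_table_entry() and rpcit() stand in for the real
 * table-update helper and the RPCIT instruction wrapper.
 */
static int map_one_page(u32 fh, u64 iova, u64 pa, int prot)
{
        /* 1. Write the translation into the OS-owned DMA table. */
        update_dma_table_entry(iova, pa, prot);

        /*
         * 2. Announce the changed IOVA range via RPCIT. On LPAR this
         * is essentially TLB maintenance; under KVM/z/VM it is the
         * point where the hypervisor can shadow the new mapping.
         */
        return rpcit(fh, iova, PAGE_SIZE);
}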

In a KVM or z/VM guest the guest is informed that IOMMU translations
need to be refreshed even for previously invalid IOVAs. With this the
guest builds its IOMMU translation tables as normal but then issues an
RPCIT for the IOVA range it touched. In the hypervisor we can then
simply walk the guest's translation tables, pin the guest pages and
map them in the host IOMMU. Prior to this series this happened in
QEMU, which does the map via vfio-iommu-type1 from user space. This
works and will remain as a fallback. Sadly it is quite slow and has a
large impact on performance, because the guest's DMA API goes through
the virtual IOMMU and thus generates a lot of mapping operations. This
series adds the same functionality as a KVM intercept of RPCIT. I
think this fits neatly into KVM: we're emulating an instruction after
all, and most of its work is KVM-specific pinning of guest pages.
Importantly, all other handling such as IOMMU domain attachment still
goes through vfio-iommu-type1; we just fast-path the map/unmap
operations.
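
In rough pseudo-C the intercept fast path does something like the
sketch below (names and details are simplified placeholders, not the
code from the series; please look at the actual patches for the real
implementation):

/*
 * Rough sketch of the RPCIT intercept fast path; simplified
 * pseudo-C with placeholder helpers, error handling omitted.
 */
static int handle_rpcit(struct kvm_vcpu *vcpu, u64 iova, u64 len)
{
        u64 addr;

        for (addr = iova; addr < iova + len; addr += PAGE_SIZE) {
                /* Read the guest's DMA translation entry for addr. */
                u64 gentry = walk_guest_dma_table(vcpu, addr);

                if (entry_valid(gentry)) {
                        /* Pin the guest page backing the entry ... */
                        struct page *page =
                                pin_guest_page(vcpu->kvm, entry_gpa(gentry));

                        /*
                         * ... and mirror the mapping into the host
                         * IOMMU (shadow) translation tables.
                         */
                        shadow_map(addr, page_to_phys(page),
                                   entry_prot(gentry));
                } else {
                        /*
                         * Invalid entry: tear down the shadow mapping,
                         * unpin the page and flush the host IOMMU TLB.
                         */
                        shadow_unmap_and_unpin(addr);
                }
        }
        return 0;
}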

In the code the map/unmap boils down to dma_walk_cpu_trans() and parts
of dma_shadow_cpu_trans(), both called from dma_table_shadow(). The
former is a function already shared between our DMA API and IOMMU API
implementations and is the only code that walks the host translation
tables. So it is true that we are in a way sidestepping the IOMMU API
ops, but we do not sidestep the code paths that access the host
translation tables. Notice that our IOMMU API implementation is also
< 400 LOC precisely because the DMA and IOMMU APIs share this code.
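
Paraphrased, the shared part looks roughly like this (not the literal
kernel code; dma_walk_cpu_trans() is the existing walker in
arch/s390/pci/pci_dma.c, while update_host_entry() and set_pt_entry()
are placeholders for illustration only):

static int update_host_entry(unsigned long *host_dma_table,
                             dma_addr_t iova, void *page_addr, int flags)
{
        unsigned long *entry;

        /*
         * Shared walker: locates (allocating intermediate tables as
         * needed) the host translation table entry for iova.
         */
        entry = dma_walk_cpu_trans(host_dma_table, iova);
        if (!entry)
                return -ENOMEM;

        /*
         * DMA API path: page_addr is the kernel buffer being mapped.
         * RPCIT shadow path: page_addr points at the pinned guest page.
         */
        set_pt_entry(entry, page_addr, flags);
        return 0;
}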

That said, if this sidestepping is truly unacceptable, I believe we
should still be able to do the mapping in a KVM RPCIT intercept while
going through the IOMMU API ops. It definitely adds overhead though,
and I'm not sure what we gain in clarity or maintainability, since we
already share the actual host table access code and there is only one
PCI IOMMU, which is part of the architecture. Also, either KVM or QEMU
needs to know about the same details in order to look at the guest's
IOMMU translation tables and emulate the guest IOMMU. It's also clear
that the IOMMU API will remain functional on its own, as it is
necessary for any non-KVM use case, which of course can't intercept
RPCIT but on the other hand can also keep mappings around much longer,
significantly reducing overhead.
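
For completeness, going through the IOMMU API from the intercept would
look roughly like the sketch below (simplified and hypothetical, just
to show where the extra indirection comes in): every page shadowed
from RPCIT would become an iommu_map()/iommu_unmap() call on the
attached domain instead of a direct update of the shared host tables.

/*
 * Hypothetical sketch of the "go through the IOMMU API ops" variant.
 * The pinning stays the same; only the host table update now goes
 * through the generic ops instead of dma_walk_cpu_trans().
 */
static int shadow_one_page_via_iommu_api(struct iommu_domain *domain,
                                         unsigned long iova,
                                         phys_addr_t paddr, int prot)
{
        return iommu_map(domain, iova, paddr, PAGE_SIZE, prot);
}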