Message-ID: <fc5cce270dc01d46a6a42f2d268166a0a952fcb3.camel@linux.ibm.com>
Date: Thu, 10 Feb 2022 15:06:35 +0100
From: Niklas Schnelle <schnelle@...ux.ibm.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Matthew Rosato <mjrosato@...ux.ibm.com>,
Alex Williamson <alex.williamson@...hat.com>,
linux-s390@...r.kernel.org, cohuck@...hat.com,
farman@...ux.ibm.com, pmorel@...ux.ibm.com,
borntraeger@...ux.ibm.com, hca@...ux.ibm.com, gor@...ux.ibm.com,
gerald.schaefer@...ux.ibm.com, agordeev@...ux.ibm.com,
frankja@...ux.ibm.com, david@...hat.com, imbrenda@...ux.ibm.com,
vneethv@...ux.ibm.com, oberpar@...ux.ibm.com, freude@...ux.ibm.com,
thuth@...hat.com, pasic@...ux.ibm.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 24/30] vfio-pci/zdev: wire up group notifier
On Thu, 2022-02-10 at 09:01 -0400, Jason Gunthorpe wrote:
> On Thu, Feb 10, 2022 at 12:15:58PM +0100, Niklas Schnelle wrote:
>
> > In a KVM or z/VM guest the guest is informed that IOMMU translations
> > need to be refreshed even for previously invalid IOVAs. With this the
> > guest builds its IOMMU translation tables as normal but then does a
> > RPCIT for the IOVA range it touched. In the hypervisor we can then
> > simply walk the translation tables, pin the guest pages and map them in
> > the host IOMMU. Prior to this series this happened in QEMU which does
> > the map via vfio-iommu-type1 from user-space. This works and will
> > remain as a fallback. Sadly it is quite slow and has a large impact on
> > performance as we need to do a lot of mapping operations as the DMA API
> > of the guest goes through the virtual IOMMU. This series thus adds the
> > same functionality but as a KVM intercept of RPCIT. Now I think this
> > neatly fits into KVM, we're emulating an instruction after all and most
> > of its work is KVM specific pinning of guest pages. Importantly all
> > other handling like IOMMU domain attachment still goes through vfio-
> > iommu-type1 and we just fast path the map/unmap operations.
>
> So you create an iommu_domain and then hand it over to kvm which then
> does map/unmap operations on it under the covers?
Yes
>
> How does the page pinning work?
The pinning is done directly in the RPCIT interception handler, which
pins both the guest's IOMMU translation tables and the guest pages
mapped for DMA.
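To make the flow concrete, roughly the following happens per guest
page of the refreshed IOVA range. This is only an illustrative sketch
using generic kernel APIs, not the code from this series; the real
handler additionally walks and pins the guest's DMA translation
tables and handles the invalidation case:

/*
 * Illustrative sketch only: pin one guest page and map it at its
 * guest IOVA into the host IOMMU. The caller would walk the guest's
 * DMA translation table for the IOVA range passed to RPCIT and call
 * this for each valid entry.
 */
#include <linux/iommu.h>
#include <linux/kvm_host.h>
#include <linux/mm.h>

static int rpcit_pin_and_map(struct kvm *kvm, struct iommu_domain *domain,
			     dma_addr_t iova, gpa_t gpa)
{
	unsigned long hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
	struct page *page;
	int ret;

	if (kvm_is_error_hva(hva))
		return -EFAULT;

	/* Pin the guest page so the host IOMMU mapping stays valid. */
	ret = pin_user_pages_fast(hva, 1, FOLL_WRITE | FOLL_LONGTERM, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* Map it at the guest's IOVA in the host DMA translation tables. */
	ret = iommu_map(domain, iova, page_to_phys(page), PAGE_SIZE,
			IOMMU_READ | IOMMU_WRITE);
	if (ret)
		unpin_user_page(page);
	return ret;
}

The inverse path for an RPCIT that invalidates a range would mirror
this with iommu_unmap() and unpin_user_page().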
>
> In the design we are trying to reach I would say this needs to be
> modeled as a special iommu_domain that has this automatic map/unmap
> behavior from following user pages. Creating it would specify the kvm
> and the in-guest base address of the guest's page table.
Makes sense.
> Then the
> magic kernel code you describe can operate on its own domain without
> becoming confused with a normal map/unmap domain.
This sounds like an interesting idea. Looking at
drivers/iommu/s390_iommu.c most of that is pretty trivial domain
handling. I wonder if we could share this by marking the existing
s390_iommu_domain type with a kind of "lent out to KVM" flag, possibly
simply a non-NULL pointer to a struct holding the guest base address,
the kvm and so on? That way we could share the setup/teardown of the
domain and of the host IOMMU tables as well as the aperture checks,
while still keeping the IOMMU API from interfering with the KVM RPCIT
intercept and vice versa. I.e. while the domain is under control of
KVM's RPCIT handling we would make all IOMMU map/unmap operations fail.
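In code that could amount to something like the following. This is
purely a strawman from my side with made up names that don't match the
current driver, just to illustrate the "lent out to KVM" marker and
the blocked map path:

/*
 * Strawman sketch only, names made up: the existing domain struct
 * grows an optional pointer that, when set, marks it as driven by
 * KVM's RPCIT intercept instead of the regular IOMMU map/unmap API.
 */
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/kvm_host.h>

struct s390_kvm_rpcit_state {
	struct kvm	*kvm;		/* guest owning the I/O translation tables */
	dma_addr_t	guest_iota;	/* guest DMA translation table origin */
};

struct s390_domain_sketch {
	struct iommu_domain		domain;
	/* ... existing fields: dma_table, device list, locks ... */
	struct s390_kvm_rpcit_state	*rpcit;	/* non-NULL: lent out to KVM */
};

static int s390_iommu_map_sketch(struct iommu_domain *domain,
				 unsigned long iova, phys_addr_t paddr,
				 size_t size, int prot, gfp_t gfp)
{
	struct s390_domain_sketch *s390_domain =
		container_of(domain, struct s390_domain_sketch, domain);

	/* While KVM drives the domain via RPCIT, refuse API mappings. */
	if (s390_domain->rpcit)
		return -EBUSY;

	/* ... regular mapping into the host DMA translation tables ... */
	return 0;
}

Unmap and iova_to_phys would get the same check, and clearing the
pointer when handing the domain back would re-enable the normal API.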
To me this more direct involvement of the IOMMU and KVM on s390x is
also a direct consequence of s390x using special instructions.
Naturally those instructions can either be intercepted or run under
hardware accelerated virtualization.
>
> It is like the HW nested translation other CPUs are doing, but instead
> of HW nested, it is SW nested.
Yes, very good analogy. Has any of that nested IOMMU translation work
been merged yet? I know AMD had something in the works, and nested
translations have been around for the MMU for a while and are also
used on s390x. We're definitely thinking about HW nested IOMMU
translations as well, so any design we come up with should be able to
deal with that too. Basically we would then execute RPCIT without
leaving the hardware virtualization mode (SIE). We believe that would
require pinning all of guest memory though, because HW can't really
pin pages.