Message-ID: <20210507110614.7b8e6998@redhat.com>
Date: Fri, 7 May 2021 11:06:14 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: "Tian, Kevin" <kevin.tian@...el.com>
Cc: Jason Gunthorpe <jgg@...dia.com>, "Liu, Yi L" <yi.l.liu@...el.com>,
"Jacob Pan" <jacob.jun.pan@...ux.intel.com>,
Auger Eric <eric.auger@...hat.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
David Woodhouse <dwmw2@...radead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Jean-Philippe Brucker <jean-philippe@...aro.com>,
Jonathan Corbet <corbet@....net>,
"Raj, Ashok" <ashok.raj@...el.com>, "Wu, Hao" <hao.wu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>
Subject: Re: [PATCH V4 05/18] iommu/ioasid: Redefine IOASID set and
allocation APIs
On Fri, 7 May 2021 07:36:49 +0000
"Tian, Kevin" <kevin.tian@...el.com> wrote:
> > From: Alex Williamson <alex.williamson@...hat.com>
> > Sent: Wednesday, April 28, 2021 11:06 PM
> >
> > On Wed, 28 Apr 2021 06:34:11 +0000
> > "Tian, Kevin" <kevin.tian@...el.com> wrote:
> > >
> > > Can you or Alex elaborate where the complexity and performance problems
> > > lie in VFIO map/unmap? We'd like to understand more detail and see how
> > > to avoid it in the new interface.
> >
> >
> > The map/unmap interface is really only good for long-lived mappings;
> > the overhead is too high for things like vIOMMU use cases or any case
> > where the mapping is intended to be dynamic. Userspace drivers must
> > make use of a long-lived buffer mapping in order to achieve performance.
>
> This is not a limitation of VFIO map/unmap. It's a limitation of any
> map/unmap semantics, since whether a mapping is long-lived or short-lived
> is dictated by userspace. Nested translation is the only viable
> optimization, allowing the 2nd-level to be a long-lived mapping even w/
> vIOMMU. From this angle I'm not sure how a new map/unmap implementation
> could address this perf limitation alone.
Sure, we don't need to try to tackle every problem at once; a map/unmap
interface compatible with what we have is a good place to start, and
nested translation may provide the high performance option. That's not
to say that we couldn't, in the future, extend map/unmap with memory
pre-registration like that done in the spapr IOMMU to see how it could
reduce latency.
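For concreteness, this is roughly the shape of both pieces today: a
long-lived mapping through the existing type1 ioctl, and spapr-style
pre-registration. The fd, buffer, and IOVA below are illustrative
assumptions, not part of any proposal:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Sketch: establish one long-lived mapping via the existing type1
     * interface; the buffer is pinned and mapped once, up front. */
    static int map_long_lived(int container_fd, void *buf, size_t len,
                              uint64_t iova)
    {
            struct vfio_iommu_type1_dma_map map = {
                    .argsz = sizeof(map),
                    .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                    .vaddr = (uintptr_t)buf,
                    .iova  = iova,
                    .size  = len,
            };

            return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }

    /* Sketch: spapr-style pre-registration; the memory is pinned and
     * accounted once here, so later map/unmap within it is cheaper. */
    static int preregister(int container_fd, void *buf, size_t len)
    {
            struct vfio_iommu_spapr_register_memory reg = {
                    .argsz = sizeof(reg),
                    .flags = 0,
                    .vaddr = (uintptr_t)buf,
                    .size  = len,
            };

            return ioctl(container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
    }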
> > The mapping and unmapping granularity has been a problem as well,
> > type1v1 allowed arbitrary unmaps to bisect the original mapping, with
> > the massive caveat that the caller relies on the return value of the
> > unmap to determine what was actually unmapped, because the IOMMU use of
> > superpages is transparent to the caller. This led to type1v2 that
> > simply restricts the user to avoid ever bisecting mappings. That still
> > leaves us with problems for things like virtio-mem support where we
> > need to create initial mappings with a granularity that allows us to
> > later remove entries, which can prevent effective use of IOMMU
> > superpages.
>
> We could start with semantics similar to type1v2.
>
> btw why does virtio-mem require a smaller granularity? Can we split
> superpages on the fly when removal actually happens (similar to page
> splitting in VM live migration for efficient dirty page tracking)?
The IOMMU API doesn't currently support those semantics. If the IOMMU
used a superpage, then the whole superpage gets unmapped; it doesn't get
atomically broken down into smaller pages. Therefore virtio-mem
proposes a fixed mapping granularity to allow for that same unmapping
granularity.
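A minimal sketch of that caveat, assuming container_fd and an already
mapped range; with the v1 semantics the size the kernel writes back can
exceed what was requested when a superpage covered the range:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Request an unmap and return the bytes the kernel actually unmapped,
     * which it writes back into unmap.size. */
    static uint64_t unmap_range(int container_fd, uint64_t iova, uint64_t size)
    {
            struct vfio_iommu_type1_dma_unmap unmap = {
                    .argsz = sizeof(unmap),
                    .iova  = iova,
                    .size  = size,
            };

            if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap))
                    return 0;

            /* Under v1, if the IOMMU had transparently used a superpage,
             * this can be larger than the size requested above. */
            return unmap.size;
    }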
> and isn't it another problem imposed by userspace? How could a new
> map/unmap implementation mitigate this problem if userspace insists
> on a smaller granularity for initial mappings?
Currently if userspace wants to guarantee unmap granularity, they need
to impose the same restriction themselves on the mapping granularity.
For instance, userspace cannot currently map a 1GB IOVA range while
guaranteeing 2MB unmap granularity of that range with a single ioctl.
Instead, userspace would need to make 512 separate 2MB mapping calls.
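Something like the sketch below, one ioctl per 2MB chunk (the names are
illustrative); the side effect is that the IOMMU can no longer back the
range with a 1GB superpage:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    #define CHUNK_2MB (2ULL << 20)
    #define RANGE_1GB (1ULL << 30)

    /* Map a 1GB region as 512 independent 2MB mappings so any 2MB chunk
     * can later be unmapped on its own (type1v2 forbids bisecting a
     * single mapping). */
    static int map_in_2mb_chunks(int container_fd, void *buf, uint64_t iova)
    {
            for (uint64_t off = 0; off < RANGE_1GB; off += CHUNK_2MB) {
                    struct vfio_iommu_type1_dma_map map = {
                            .argsz = sizeof(map),
                            .flags = VFIO_DMA_MAP_FLAG_READ |
                                     VFIO_DMA_MAP_FLAG_WRITE,
                            .vaddr = (uintptr_t)buf + off,
                            .iova  = iova + off,
                            .size  = CHUNK_2MB,
                    };

                    if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map))
                            return -1;
            }
            return 0;
    }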
> > Locked page accounting has been another constant issue. We perform
> > locked page accounting at the container level, where each container
> > accounts independently. A user may require multiple containers; the
> > containers may pin the same physical memory, yet it gets accounted against
> > the user once per container.
>
> For /dev/ioasid there is still an open question whether a process is
> allowed to open /dev/ioasid once or multiple times. If there is only one
> ioasid_fd per process, the accounting can be made accurate. Otherwise the
> same problem still exists, as each ioasid_fd is akin to the container, and
> then we need to find a better solution.
We had tossed around an idea of a super-container with vfio; it's maybe
something we'd want to incorporate into this design. For instance, if
memory could be pre-registered with a super-container, which would
handle the locked memory accounting for that memory, then
sub-containers could all handle the IOMMU context of their sets of
devices relative to that common memory pool.
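Purely as a strawman, the flow could look something like this; the
device node semantics, struct, and ioctl below are invented for
illustration only and exist nowhere:

    #include <stdint.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>

    /* Hypothetical super-container uAPI, invented for illustration. */
    struct ioasid_register_memory {
            uint64_t vaddr;
            uint64_t size;
    };
    #define IOASID_REGISTER_MEMORY _IOW('i', 0x40, struct ioasid_register_memory)

    static int preregister_pool(void *pool, uint64_t pool_size)
    {
            int super_fd = open("/dev/ioasid", O_RDWR);
            struct ioasid_register_memory reg = {
                    .vaddr = (uintptr_t)pool,
                    .size  = pool_size,
            };

            if (super_fd < 0)
                    return -1;

            /* Pinning and locked page accounting would happen once, here;
             * each sub-context (per set of devices) would then build its
             * IOMMU mappings strictly within this pool, without re-pinning
             * or re-accounting the same memory. */
            return ioctl(super_fd, IOASID_REGISTER_MEMORY, &reg);
    }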
> > Those are the main ones I can think of. It is nice to have a simple
> > map/unmap interface; I'd hope that a new /dev/ioasid interface wouldn't
> > raise the barrier to entry too high, but the user needs more control of
> > their mappings, and locked page accounting should probably be offloaded
> > somewhere. Thanks,
> >
>
> Based on your feedback I feel it's probably reasonable to start with
> type1v2 semantics for the new interface. Locked accounting could
> also start with the same VFIO restriction and then be improved
> incrementally, if a cleaner way is too intrusive (as long as it doesn't
> affect the uAPI).
> But I didn't get the suggestion on "more control of their mappings".
> Can you elaborate?
Things like I noted above: userspace cannot currently specify mapping
granularity, nor does it have any visibility into the granularity it gets
from the IOMMU. What actually happens in the IOMMU is pretty opaque to
the user currently. Thanks,
Alex