Message-ID: <20210421133312.15307c44@redhat.com>
Date: Wed, 21 Apr 2021 13:33:12 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: "Liu, Yi L" <yi.l.liu@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Auger Eric <eric.auger@...hat.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
"Tian, Kevin" <kevin.tian@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
David Woodhouse <dwmw2@...radead.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Jean-Philippe Brucker <jean-philippe@...aro.com>,
Jonathan Corbet <corbet@....net>,
"Raj, Ashok" <ashok.raj@...el.com>, "Wu, Hao" <hao.wu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>
Subject: Re: [PATCH V4 05/18] iommu/ioasid: Redefine IOASID set and
allocation APIs
On Wed, 21 Apr 2021 14:52:03 -0300
Jason Gunthorpe <jgg@...dia.com> wrote:
> On Wed, Apr 21, 2021 at 10:54:51AM -0600, Alex Williamson wrote:
>
> > That's essentially replacing vfio-core, where I think we're more
>
> I am only talking about /dev/vfio here which is basically the IOMMU
> interface part.
>
> I still expect that VFIO_GROUP_SET_CONTAINER will be used to connect
> /dev/{ioasid,vfio} to the VFIO group and all the group and device
> logic stays inside VFIO.
But that group and device logic is also tied to the container, where
the IOMMU backend is the interchangeable thing that provides the IOMMU
manipulation for that container. If you're using
VFIO_GROUP_SET_CONTAINER to associate a group to a /dev/ioasid, then
you're really either taking that group outside of vfio or you're
re-implementing group management in /dev/ioasid. I'd expect the
transition point to be at VFIO_SET_IOMMU.
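For reference, the existing flow is roughly the following (error
handling omitted; group 26 and the device address are just the examples
from the vfio documentation):

  int container = open("/dev/vfio/vfio", O_RDWR);
  int group = open("/dev/vfio/26", O_RDWR);

  /* group <-> container association, handled by vfio-core */
  ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

  /* IOMMU backend selection; the natural point to accept an
   * ioasid fd rather than replacing the container itself */
  ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

  int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");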
> The appeal of unifying /dev/{ioasid,vfio} to a single fops is that it
> cleans up vfio a lot - we don't have to have two different code paths
> where one handles a vfio_container and the other an ioasid_container,
> and all the related different iommu ops and so on.
Currently vfio IOMMU backends don't know about containers either.
Setting the vfio IOMMU for a container creates an object within the
IOMMU backend representing that IOMMU context. IOMMU groups are then
attached to that context, where the IOMMU backend can add the group to
an existing IOMMU domain or create a new one for it, or, if no
compatible IOMMU context can be created, reject it.
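ie. the container sees the backend only through a set of ops,
abbreviated here from include/linux/vfio.h:

  struct vfio_iommu_driver_ops {
          char            *name;
          struct module   *owner;
          void            *(*open)(unsigned long arg);
          void            (*release)(void *iommu_data);
          long            (*ioctl)(void *iommu_data, unsigned int cmd,
                                   unsigned long arg);
          int             (*attach_group)(void *iommu_data,
                                          struct iommu_group *group);
          void            (*detach_group)(void *iommu_data,
                                          struct iommu_group *group);
          /* pin/unpin pages, notifiers, etc. elided */
  };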
> Everything can be switched to ioasid_container all down the line. If
> it wasn't for PPC this looks fairly simple.
At what point is it no longer vfio? I'd venture to say that replacing
the container rather than invoking a different IOMMU backend is that
point.
> Since getting rid of PPC looks a bit hard, we'd be stuck with
> accepting a /dev/ioasid and then immediately wrapping it in a
> vfio_container and shimming it through a vfio_iommu_ops. It is not
> ideal at all, but in my look around I don't see a major problem if
> type1 implementation is moved to live under /dev/ioasid.
But type1 is *just* an IOMMU backend, not "/dev/vfio". Given that
nobody flinched at removing NVLink support, maybe just deprecate SPAPR
now and see if anyone objects ;)
> For concreteness if we look at the set container flow with ioasid I'd
> say something like:
>
> vfio_group_fops_unl_ioctl()
>   VFIO_GROUP_SET_CONTAINER
>     vfio_group_set_container()
>       if (f.file->f_op == &vfio_fops) {
>         // Use a real vfio_container and vfio_iommu_driver
>         driver->ops->attach_group()
>           tce_iommu_attach_group()
>       }
>
>       if (ioasid_container = ioasid_get_from_fd(container_fd)) {
>         // Create a dummy vfio_container and use the ioasid driver
>         container = kzalloc()
>         container->iommu_driver = ioasid_shim
>         driver->ops->attach_group()
>           ioasid_shim_attach_group(ioasid_container, ...)
>             ioasid_attach_group()
>               // What used to be vfio_iommu_attach_group()
>       }
How do you handle multiple groups with the same container? Again, I'd
expect some augmentation of VFIO_SET_IOMMU so that /dev/vfio continues
to exist and manage group to container mapping and /dev/ioasid manages
the IOMMU context of that container.
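ie. something like the below, where the group/container plumbing is
untouched and only the backend selection changes; VFIO_SET_IOMMU_FD is
a purely hypothetical name for illustration:

  int ioasid = open("/dev/ioasid", O_RDWR);

  /* group management stays in /dev/vfio, including multiple groups
   * sharing one container */
  ioctl(group_a, VFIO_GROUP_SET_CONTAINER, &container);
  ioctl(group_b, VFIO_GROUP_SET_CONTAINER, &container);

  /* hypothetical: the container's IOMMU context comes from ioasid */
  ioctl(container, VFIO_SET_IOMMU_FD, ioasid);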
>
> Broadly, all the ops vfio needs go through the ioasid_shim, which relays
> them to the generic ioasid API.
/dev/vfio essentially already passes through all fops to the IOMMU
backend once the VFIO_SET_IOMMU is established.
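ie. abbreviated from vfio_fops_unl_ioctl() in drivers/vfio/vfio.c
(structure only, details elided):

  switch (cmd) {
  case VFIO_GET_API_VERSION:
  case VFIO_CHECK_EXTENSION:
  case VFIO_SET_IOMMU:
          /* handled by vfio-core itself */
          ...
          break;
  default:
          /* everything else is relayed to the IOMMU backend */
          driver = container->iommu_driver;
          if (driver)
                  ret = driver->ops->ioctl(container->iommu_data,
                                           cmd, arg);
  }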
> We end up with an ioasid.h that basically has the vfio_iommu_type1 code
> lightly recast into some 'struct iommu_container' and a set of
> ioasid_* function entry points that follow vfio_iommu_driver_ops_type1:
>   ioasid_attach_group
>   ioasid_detach_group
>   ioasid_<something about user pages>
>   ioasid_read/ioasid_write
Again, this looks like a vfio IOMMU backend. What are we accomplishing
by replacing /dev/vfio with /dev/ioasid versus some manipulation of
VFIO_SET_IOMMU accepting a /dev/ioasid fd?
> If we have this, and /dev/ioasid implements the legacy IOCTLs, then
> /dev/vfio == /dev/ioasid and we can compile out vfio_fops and related
> from vfio.c and tell ioasid.c to create /dev/vfio instead using the
> ops it owns.
Why would we want /dev/ioasid to implement legacy ioctls instead of
simply implementing an interface to allow /dev/ioasid to be used as a
vfio IOMMU backend?
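A rough sketch of what I mean, where all the ioasid_* calls are
hypothetical placeholders mirroring your shim above:

  static int vfio_iommu_ioasid_attach_group(void *iommu_data,
                                            struct iommu_group *group)
  {
          /* hypothetical ioasid API, per the shim sketched above */
          return ioasid_attach_group(iommu_data, group);
  }

  static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_ioasid = {
          .name         = "vfio-iommu-ioasid",
          .owner        = THIS_MODULE,
          .attach_group = vfio_iommu_ioasid_attach_group,
          /* open/release/ioctl/detach_group wrapping ioasid_* likewise */
  };

  /* registered via vfio_register_iommu_driver(), same as type1/spapr */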
> This is a very long winded way of saying ideally we'd do
> approximately:
> git mv drivers/vfio/vfio_iommu_type1.c drivers/ioasid/ioasid.c
>
> As the first step. Essentially we declare that what is type1 is really
> the user interface to the internal kernel IOMMU kAPI, which has been
> steadily evolving since type1 was created 10 years ago.
The pseudo code above really suggests you do want to remove
/dev/vfio/vfio, but type1 is only one of the IOMMU backends for vfio, so
I can't quite figure out if we're talking past each other.
As I expressed in another thread, type1 has a lot of shortcomings. The
mapping interface leaves userspace trying desperately to use statically
mapped buffers because the map/unmap latency is too high. We have
horrible issues with duplicate locked page accounting across
containers. It suffers pretty hard from feature creep in various
areas. A new IOMMU backend is an opportunity to redesign some of these
things.
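To be concrete about the latency point: with the documented type1 uAPI,
every transient buffer costs a pair of ioctls, each doing page
pinning/unpinning and IOMMU page table updates:

  struct vfio_iommu_type1_dma_map map = {
          .argsz = sizeof(map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .vaddr = (__u64)(uintptr_t)buf,
          .iova  = iova,
          .size  = size,
  };
  struct vfio_iommu_type1_dma_unmap unmap = {
          .argsz = sizeof(unmap),
          .iova  = iova,
          .size  = size,
  };

  ioctl(container, VFIO_IOMMU_MAP_DMA, &map);     /* pin + map */
  /* ... DMA ... */
  ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap); /* unmap + unpin */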
> > The interface of making that selection might change to accept an
> > external /dev/ioasid file descriptor, of course. Maybe you can
> > elaborate on how the vfio device and group uAPI live (or not) in
> > this new scheme where /dev/ioasid is the primary interface. Thanks,
>
> They stay in vfio. You'd still open a group and you'd still pass in
> either /dev/vfio or /dev/ioasid to define the container
>
> Though, completely as an unrelated aside, I admit to not entirely
> understanding why the group is the central element of the uAPI.
>
> It is weird that the vfio "drivers" all work on the struct vfio_device
> (at least after my series), and it has a file_operations presence via
> vfio_device_fops, but instead of struct vfio_device directly having a
> 'struct device' and cdev to access the FD we get it through a group FD
> and a group chardev via VFIO_GROUP_GET_DEVICE_FD
>
> If we were to revise this, and I don't see a huge reason to do so, I
> would put a struct device and cdev in struct vfio_device, attach the
> vfio_device directly to the ioasid and then forget about the group, at
> least as uapi, completely.
>
> Or at least I don't see where that gets into trouble, but I'm not too
> familiar with the multi-vfio in a process scenario..
The vfio_group is the unit of userspace ownership as it reflects the
IOMMU group as the unit of isolation. Ideally there's a 1:1 mapping
between device and group, but that is of course not always the case.
The IOMMU group also abstracts isolation and visibility relative to
DMA. For example, in a PCIe topology a multi-function device may not
have isolation between functions, but each requester ID is visible to
the IOMMU. This lacks isolation, but not IOMMU granularity or
visibility. A conventional PCI topology, however, lacks both isolation
and visibility: all devices downstream use either the PCIe-to-PCI
bridge RID or a RID derived from the secondary bus. We can also have
mixed topologies, for example PCIe-to-PCI<->PCI-to-PCIe, where the
grouping code needs to search upstream for the highest level where we
achieve both isolation and visibility.
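A simplified sketch of that search, loosely modeled on
pci_device_group() in the IOMMU core (DMA alias handling elided):

  /* Climb the topology until DMA from everything below is both
   * isolated (ACS on the whole upstream path) and visible (the
   * requester ID at this level reaches the IOMMU un-aliased). */
  for (bus = pdev->bus; !pci_is_root_bus(bus); bus = bus->parent) {
          if (!bus->self)
                  continue;  /* no bridge device, e.g. conventional PCI */
          if (pci_acs_path_enabled(bus->self, NULL, REQ_ACS_FLAGS))
                  break;     /* isolated from here on up */
          pdev = bus->self;  /* not isolated: group at this bridge */
  }
  /* pdev now marks the highest point lacking isolation; the group
   * is formed there */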
To simplify this, we use the group as the unit of IOMMU context, again
favoring singleton group behavior.
An individual vfio_device doesn't know about these isolation
dependencies, thus while a vfio bus/device driver like vfio-pci can
expose a device, it's vfio-core that manages whether the isolated set
of devices which includes that device, ie. the group, meets the
requirements for userspace access.
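That viability requirement is already visible in the documented uAPI:

  struct vfio_group_status status = { .argsz = sizeof(status) };

  ioctl(group, VFIO_GROUP_GET_STATUS, &status);
  if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
          return -1;  /* some device in the group isn't under vfio
                         control; userspace access is denied */

Thanks,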
Alex