Message-ID: <20210921215846.GB327412@nvidia.com>
Date: Tue, 21 Sep 2021 18:58:46 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: Liu Yi L <yi.l.liu@...el.com>, hch@....de, jasowang@...hat.com,
joro@...tes.org, jean-philippe@...aro.org, kevin.tian@...el.com,
parav@...lanox.com, lkml@...ux.net, pbonzini@...hat.com,
lushenming@...wei.com, eric.auger@...hat.com, corbet@....net,
ashok.raj@...el.com, yi.l.liu@...ux.intel.com,
jun.j.tian@...el.com, hao.wu@...el.com, dave.jiang@...el.com,
jacob.jun.pan@...ux.intel.com, kwankhede@...dia.com,
robin.murphy@....com, kvm@...r.kernel.org,
iommu@...ts.linux-foundation.org, dwmw2@...radead.org,
linux-kernel@...r.kernel.org, baolu.lu@...ux.intel.com,
david@...son.dropbear.id.au, nicolinc@...dia.com
Subject: Re: [RFC 05/20] vfio/pci: Register device to /dev/vfio/devices
On Tue, Sep 21, 2021 at 03:09:29PM -0600, Alex Williamson wrote:
> the iommufd uAPI at all. Isn't part of that work to understand how KVM
> will be told about non-coherent devices rather than "meh, skip it in the
> kernel"? Also let's not forget that vfio is not only for KVM.
vfio is not only for KVM, but AFAICT the wbinvd stuff is only for
KVM... But yes, I agree this should be sorted out at this stage.
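For reference, this is roughly what virt/kvm/vfio.c does today to tell
KVM about non-coherent devices (paraphrased sketch; the kvm_arch_*
hooks are the real ones, the rest is abbreviated):

	static void kvm_vfio_update_coherency(struct kvm_device *dev)
	{
		struct kvm_vfio *kv = dev->private;
		bool noncoherent = false;
		struct kvm_vfio_group *kvg;

		mutex_lock(&kv->lock);

		/* Any group without enforced cache coherency taints the VM */
		list_for_each_entry(kvg, &kv->group_list, node) {
			if (!kvm_vfio_group_is_coherent(kvg->vfio_group)) {
				noncoherent = true;
				break;
			}
		}

		if (noncoherent != kv->noncoherent) {
			kv->noncoherent = noncoherent;

			/* KVM then emulates wbinvd instead of ignoring it */
			if (kv->noncoherent)
				kvm_arch_register_noncoherent_dma(dev->kvm);
			else
				kvm_arch_unregister_noncoherent_dma(dev->kvm);
		}

		mutex_unlock(&kv->lock);
	}

An iommufd-based flow needs an equivalent signal, which is part of
what has to be sorted out.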
> > > When the device is opened via /dev/vfio/devices, vfio-pci should prevent
> > > the user from accessing the assigned device because the device is still
> > > attached to the default domain which may allow user-initiated DMAs to
> > > touch arbitrary place. The user access must be blocked until the device
> > > is later bound to an iommufd (see patch 08). The binding acts as the
> > > contract for putting the device in a security context which ensures user-
> > > initiated DMAs via this device cannot harm the rest of the system.
> > >
> > > This patch introduces a vdev->block_access flag for this purpose. It's set
> > > when the device is opened via /dev/vfio/devices and cleared after binding
> > > to iommufd succeeds. mmap and r/w handlers check this flag to decide whether
> > > user access should be blocked or not.
> >
> > This should not be in vfio_pci.
> >
> > AFAIK there is no condition where a vfio driver can work without being
> > connected to some kind of iommu back end, so the core code should
> > handle this interlock globally. A vfio driver's ops should not be
> > callable until the iommu is connected.
> >
> > The only vfio_pci patch in this series should be adding a new callback
> > op to take in an iommufd and register the pci_device as a iommufd
> > device.
>
> Couldn't the same argument be made that registering a $bus device as an
> iommufd device is a common interface that shouldn't be the
> responsibility of the vfio device driver?
The driver needs enough involvement to signal what kind of IOMMU
connection it wants, e.g. attaching to a physical device will use the
iommufd_attach_device() path, but attaching to a SW page table should
use a different API call. PASID should also be different.
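Roughly, something like this split (all names here are hypothetical,
just to illustrate that the driver picks the attach flavor):

	/* Physical device: DMA is translated by the real IOMMU */
	int iommufd_attach_device(struct iommufd_ctx *ictx,
				  struct device *dev, u32 ioas_id);

	/* mdev-style: driver mediates DMA against a SW page table */
	int iommufd_attach_sw_iommu(struct iommufd_ctx *ictx,
				    struct iommufd_sw_ops *ops, u32 ioas_id);

	/* PASID-granular attach for SIOV-like devices */
	int iommufd_attach_pasid(struct iommufd_ctx *ictx,
				 struct device *dev, u32 pasid, u32 ioas_id);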
Possibly a good arrangement is to have the core provide some generic
ioctl ops function, say 'vfio_all_device_iommufd_bind', that everything
except mdev drivers can use so the code isn't all duplicated.
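As a hypothetical sketch (the uapi struct and lookup helper are made
up for illustration):

	long vfio_all_device_iommufd_bind(struct vfio_device *vdev,
					  unsigned long arg)
	{
		struct vfio_device_bind_iommufd bind; /* hypothetical uapi */
		struct iommufd_ctx *ictx;

		if (copy_from_user(&bind, (void __user *)arg, sizeof(bind)))
			return -EFAULT;

		ictx = iommufd_ctx_from_fd(bind.iommufd); /* hypothetical */
		if (IS_ERR(ictx))
			return PTR_ERR(ictx);

		/* Every physical device takes the same attach path */
		return iommufd_attach_device(ictx, vdev->dev, bind.ioas_id);
	}

Each non-mdev driver then just points its ops at this one function.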
> non-group device anything more than a reservation of that device if
> access is withheld until iommu isolation? I also don't really want to
> predict how ioctls might evolve to guess whether only blocking .read,
> .write, and .mmap callbacks are sufficient. Thanks,
This is why I said the entire fops should be blocked by a dummy fops,
so the core code keeps the vfio_device FD parked and userspace is
unable to access the ops until device attachment, and thus IOMMU
isolation, is completed.
Simple and easy to reason about: a parked FD is very similar to a
closed FD.
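A minimal sketch of the interlock in the core code (the
'iommufd_attached' flag is made up for illustration):

	static ssize_t vfio_device_fops_read(struct file *filep,
					     char __user *buf,
					     size_t count, loff_t *ppos)
	{
		struct vfio_device *device = filep->private_data;

		/* Parked: reject everything until bound to an iommufd */
		if (!smp_load_acquire(&device->iommufd_attached))
			return -ENODEV;

		return device->ops->read(device, buf, count, ppos);
	}

with the attach path doing the matching store-release once isolation
is established. Same pattern for write/mmap/ioctl, all in the core, no
driver involvement.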
Jason