Message-ID: <20210222175523.GQ4247@nvidia.com>
Date: Mon, 22 Feb 2021 13:55:23 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Alex Williamson <alex.williamson@...hat.com>
CC: <cohuck@...hat.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <peterx@...hat.com>
Subject: Re: [RFC PATCH 10/10] vfio/type1: Register device notifier
On Mon, Feb 22, 2021 at 09:52:32AM -0700, Alex Williamson wrote:
> Introduce a new default strict MMIO mapping mode where the vma for
> a VM_PFNMAP mapping must be backed by a vfio device. This allows
> holding a reference to the device and registering a notifier for the
> device, which additionally keeps the device in an IOMMU context for
> the extent of the DMA mapping. On notification of device release,
> automatically drop the DMA mappings for it.
>
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> ---
> drivers/vfio/vfio_iommu_type1.c | 124 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 123 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index b34ee4b96a4a..2a16257bd5b6 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -61,6 +61,11 @@ module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
> MODULE_PARM_DESC(dma_entry_limit,
> "Maximum number of user DMA mappings per container (65535).");
>
> +static bool strict_mmio_maps = true;
> +module_param_named(strict_mmio_maps, strict_mmio_maps, bool, 0644);
> +MODULE_PARM_DESC(strict_mmio_maps,
> + "Restrict to safe DMA mappings of device memory (true).");
I think this should be a kconfig; historically we've required kconfig
to opt in to unsafe things that could violate kernel security. Someone
building a secure-boot trusted kernel should not have an option for
userspace to just turn off protections.
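Something along these lines, say (CONFIG_VFIO_UNSAFE_MMIO_MAPS is a
made-up symbol name, just to show the shape):

#ifdef CONFIG_VFIO_UNSAFE_MMIO_MAPS
static bool strict_mmio_maps = true;
module_param_named(strict_mmio_maps, strict_mmio_maps, bool, 0644);
MODULE_PARM_DESC(strict_mmio_maps,
		 "Restrict to safe DMA mappings of device memory (true).");
#else
/* No opt-out compiled in, the restriction is always enforced */
#define strict_mmio_maps true
#endif

ie the module parameter doesn't exist at all unless the kconfig has
opted in to the unsafe path.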
> +/* Req separate object for async removal from notifier vs dropping vfio_dma */
> +struct pfnmap_obj {
> +	struct notifier_block nb;
> +	struct work_struct work;
> +	struct vfio_iommu *iommu;
> +	struct vfio_device *device;
> +};
So this is basically the dmabuf; I think it would be simple enough to
go in here and change it down the road if someone had interest.
> +static void unregister_device_bg(struct work_struct *work)
> +{
> +	struct pfnmap_obj *pfnmap = container_of(work, struct pfnmap_obj, work);
> +
> +	vfio_device_unregister_notifier(pfnmap->device, &pfnmap->nb);
> +	vfio_device_put(pfnmap->device);
Holding the device reference keeps the device from becoming
unregistered, but what happens during hot reset? Is this what the
cover letter was talking about? CPU access is revoked but P2P is
still possible?
> +static int vfio_device_nb_cb(struct notifier_block *nb,
> +			     unsigned long action, void *unused)
> +{
> +	struct pfnmap_obj *pfnmap = container_of(nb, struct pfnmap_obj, nb);
> +
> +	switch (action) {
> +	case VFIO_DEVICE_RELEASE:
> +	{
> +		struct vfio_dma *dma, *dma_last = NULL;
> +		int retries = 0;
> +again:
> +		mutex_lock(&pfnmap->iommu->lock);
> +		dma = pfnmap_find_dma(pfnmap);
It feels a bit strange that the vfio_dma isn't linked to the
pfnmap_obj instead of searching the entire list?
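For instance (sketch only, the new field names here are invented), the
pfnmap_obj could carry a list of the vfio_dma's that use it:

struct pfnmap_obj {
	struct notifier_block nb;
	struct work_struct work;
	struct vfio_iommu *iommu;
	struct vfio_device *device;
	struct list_head dma_list;	/* vfio_dma's using this device */
};

with each vfio_dma getting a list_head that is linked under
iommu->lock, so the release notifier only walks the mappings that
actually reference the device instead of every dma in the container.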
> @@ -549,8 +625,48 @@ static int vaddr_get_pfn(struct vfio_iommu *iommu, struct vfio_dma *dma,
> if (ret == -EAGAIN)
> goto retry;
I'd prefer this were written a bit differently; I would like it very
much if this didn't mis-use follow_pte() by returning the pfn outside
the lock.
vaddr_get_bar_pfn(..)
{
	vma = find_vma_intersection(mm, vaddr, vaddr + 1);
	if (!vma)
		return -ENOENT;

	if ((vma->vm_flags & VM_DENYWRITE) && (prot & PROT_WRITE)) // Check me
		return -EFAULT;

	device = vfio_device_get_from_vma(vma);
	if (!device)
		return -ENOENT;

	/*
	 * Now do the same as vfio_pci_mmap_fault() - the vm_pgoff must
	 * be the physical pfn when using this mechanism. Delete
	 * follow_pte() entirely.
	 */
	pfn = (vaddr - vma->vm_start) / PAGE_SIZE + vma->vm_pgoff;

	/*
	 * De-dup device and record that we are using the device's
	 * pages in the pfnmap.
	 */
	...
}
This would be significantly better if it could do whole ranges instead
of a page at a time.
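Roughly like this (a sketch, all the names are made up) - since the
pfns are linear in vm_pgoff, the whole intersection with the vma can
be converted in one step:

static int vaddr_get_bar_pfns(struct vm_area_struct *vma,
			      unsigned long vaddr, unsigned long npages,
			      unsigned long *pfn_base)
{
	if (vaddr + npages * PAGE_SIZE > vma->vm_end)
		return -ERANGE;		/* caller clamps to the vma */

	*pfn_base = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
	return 0;	/* covers *pfn_base .. *pfn_base + npages - 1 */
}

and then the map path does one device lookup per vma instead of one
per page.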
Jason