Message-ID: <20200820154853.21b660d2@x1.home>
Date: Thu, 20 Aug 2020 15:48:53 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Liu Yi L <yi.l.liu@...el.com>
Cc: eric.auger@...hat.com, baolu.lu@...ux.intel.com, joro@...tes.org,
kevin.tian@...el.com, jacob.jun.pan@...ux.intel.com,
ashok.raj@...el.com, jun.j.tian@...el.com, yi.y.sun@...el.com,
jean-philippe@...aro.org, peterx@...hat.com, hao.wu@...el.com,
stefanha@...il.com, iommu@...ts.linux-foundation.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 12/15] vfio/type1: Add vSVA support for IOMMU-backed
mdevs
On Mon, 27 Jul 2020 23:27:41 -0700
Liu Yi L <yi.l.liu@...el.com> wrote:
> In recent years, the mediated device pass-through framework (e.g.
> vfio-mdev) has been used to achieve flexible device sharing across
> domains (e.g. VMs). There are also hardware-assisted mediated
> pass-through solutions from platform vendors, e.g. Intel VT-d
> scalable mode, which supports the Intel Scalable I/O Virtualization
> technology. Such mdevs are called IOMMU-backed mdevs, as the IOMMU
> enforces DMA isolation for them. In the kernel, IOMMU-backed mdevs
> are exposed to the IOMMU layer via the aux-domain

Or a physical IOMMU backing device.

> concept, which means mdevs are protected by an IOMMU domain that is
> auxiliary to the domain the kernel driver primarily uses for the DMA
> API. Details can be found in the KVM Forum presentation below:
>
> https://events19.linuxfoundation.org/wp-content/uploads/2017/12/\
> Hardware-Assisted-Mediated-Pass-Through-with-VFIO-Kevin-Tian-Intel.pdf

I think letting the line exceed 80 columns is preferable so that it's
clickable. Thanks,

Alex

> This patch extends the NESTING_IOMMU ops to IOMMU-backed mdev
> devices. The main requirement is to use the auxiliary domain
> associated with the mdev.
>
> Cc: Kevin Tian <kevin.tian@...el.com>
> CC: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> CC: Jun Tian <jun.j.tian@...el.com>
> Cc: Alex Williamson <alex.williamson@...hat.com>
> Cc: Eric Auger <eric.auger@...hat.com>
> Cc: Jean-Philippe Brucker <jean-philippe@...aro.org>
> Cc: Joerg Roedel <joro@...tes.org>
> Cc: Lu Baolu <baolu.lu@...ux.intel.com>
> Reviewed-by: Eric Auger <eric.auger@...hat.com>
> Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> ---
> v5 -> v6:
> *) add Reviewed-by from Eric Auger.
>
> v1 -> v2:
> *) check the iommu_device to ensure the handling mdev is IOMMU-backed
> ---
> drivers/vfio/vfio_iommu_type1.c | 40 ++++++++++++++++++++++++++++++++++++----
> 1 file changed, 36 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index bf95a0f..9d8f252 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2379,20 +2379,41 @@ static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
> return ret;
> }
>
> +static struct device *vfio_get_iommu_device(struct vfio_group *group,
> + struct device *dev)
> +{
> + if (group->mdev_group)
> + return vfio_mdev_get_iommu_device(dev);
> + else
> + return dev;
> +}
> +
> static int vfio_dev_bind_gpasid_fn(struct device *dev, void *data)
> {
> struct domain_capsule *dc = (struct domain_capsule *)data;
> unsigned long arg = *(unsigned long *)dc->data;
> + struct device *iommu_device;
> +
> + iommu_device = vfio_get_iommu_device(dc->group, dev);
> + if (!iommu_device)
> + return -EINVAL;
>
> - return iommu_uapi_sva_bind_gpasid(dc->domain, dev, (void __user *)arg);
> + return iommu_uapi_sva_bind_gpasid(dc->domain, iommu_device,
> + (void __user *)arg);
> }
>
> static int vfio_dev_unbind_gpasid_fn(struct device *dev, void *data)
> {
> struct domain_capsule *dc = (struct domain_capsule *)data;
> unsigned long arg = *(unsigned long *)dc->data;
> + struct device *iommu_device;
>
> - iommu_uapi_sva_unbind_gpasid(dc->domain, dev, (void __user *)arg);
> + iommu_device = vfio_get_iommu_device(dc->group, dev);
> + if (!iommu_device)
> + return -EINVAL;
> +
> + iommu_uapi_sva_unbind_gpasid(dc->domain, iommu_device,
> + (void __user *)arg);
> return 0;
> }
>
> @@ -2401,8 +2422,13 @@ static int __vfio_dev_unbind_gpasid_fn(struct device *dev, void *data)
> struct domain_capsule *dc = (struct domain_capsule *)data;
> struct iommu_gpasid_bind_data *unbind_data =
> (struct iommu_gpasid_bind_data *)dc->data;
> + struct device *iommu_device;
> +
> + iommu_device = vfio_get_iommu_device(dc->group, dev);
> + if (!iommu_device)
> + return -EINVAL;
>
> - iommu_sva_unbind_gpasid(dc->domain, dev, unbind_data);
> + iommu_sva_unbind_gpasid(dc->domain, iommu_device, unbind_data);
> return 0;
> }
>
> @@ -3060,8 +3086,14 @@ static int vfio_dev_cache_invalidate_fn(struct device *dev, void *data)
> {
> struct domain_capsule *dc = (struct domain_capsule *)data;
> unsigned long arg = *(unsigned long *)dc->data;
> + struct device *iommu_device;
> +
> + iommu_device = vfio_get_iommu_device(dc->group, dev);
> + if (!iommu_device)
> + return -EINVAL;
>
> - iommu_uapi_cache_invalidate(dc->domain, dev, (void __user *)arg);
> + iommu_uapi_cache_invalidate(dc->domain, iommu_device,
> + (void __user *)arg);
> return 0;
> }
>