Message-ID: <20230904153403.GB815284@myrica>
Date: Mon, 4 Sep 2023 16:34:03 +0100
From: Jean-Philippe Brucker <jean-philippe@...aro.org>
To: Niklas Schnelle <schnelle@...ux.ibm.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
virtualization@...ts.linux-foundation.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable
deferred flush
On Fri, Aug 25, 2023 at 05:21:26PM +0200, Niklas Schnelle wrote:
> Add ops->flush_iotlb_all operation to enable virtio-iommu for the
> dma-iommu deferred flush scheme. This results in a significant increase
> in performance in exchange for a window in which devices can still
> access previously IOMMU mapped memory. To get back to the prior behavior
> iommu.strict=1 may be set on the kernel command line.
Maybe add that it depends on CONFIG_IOMMU_DEFAULT_DMA_{LAZY,STRICT} as
well, because I've seen kernel configs that enable either.
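If I remember the core code correctly, the Kconfig choice only seeds the
default and iommu.strict= still overrides it, roughly like this
(paraphrasing drivers/iommu/iommu.c from memory, not a verbatim copy):

	/* Build-time default: strict unless CONFIG_IOMMU_DEFAULT_DMA_LAZY
	 * was selected instead.
	 */
	static bool iommu_dma_strict __read_mostly =
		IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);

	/* Command-line override: iommu.strict=0/1 */
	static int __init iommu_dma_setup(char *str)
	{
		int ret = kstrtobool(str, &iommu_dma_strict);

		if (!ret)
			iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
		return ret;
	}
	early_param("iommu.strict", iommu_dma_setup);

So a config that selects CONFIG_IOMMU_DEFAULT_DMA_STRICT keeps strict
invalidation even with this patch applied, unless the user passes
iommu.strict=0.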
>
> Link: https://lore.kernel.org/lkml/20230802123612.GA6142@myrica/
> Signed-off-by: Niklas Schnelle <schnelle@...ux.ibm.com>
> ---
> drivers/iommu/virtio-iommu.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index fb73dec5b953..1b7526494490 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
> return viommu_sync_req(vdomain->viommu);
> }
>
> +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> + struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> + if (!vdomain->nr_endpoints)
> + return;
As for patch 1, a NULL check in viommu_sync_req() would allow dropping
this one.
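Something along these lines (untested sketch, assuming vdomain->viommu
stays NULL until the first endpoint is attached):

	static int viommu_sync_req(struct viommu_dev *viommu)
	{
		/* Nothing to sync for a domain that was never attached */
		if (!viommu)
			return 0;
		...
	}

Then flush_iotlb_all and the other sync paths could call it
unconditionally.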
Thanks,
Jean
> + viommu_sync_req(vdomain->viommu);
> +}
> +
> static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> {
> struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> @@ -1049,6 +1058,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
> switch (cap) {
> case IOMMU_CAP_CACHE_COHERENCY:
> return true;
> + case IOMMU_CAP_DEFERRED_FLUSH:
> + return true;
> default:
> return false;
> }
> @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
> .map_pages = viommu_map_pages,
> .unmap_pages = viommu_unmap_pages,
> .iova_to_phys = viommu_iova_to_phys,
> + .flush_iotlb_all = viommu_flush_iotlb_all,
> .iotlb_sync = viommu_iotlb_sync,
> .iotlb_sync_map = viommu_iotlb_sync_map,
> .free = viommu_domain_free,
>
> --
> 2.39.2
>