Message-ID: <20241118145929.GB27795@willie-the-truck>
Date: Mon, 18 Nov 2024 14:59:30 +0000
From: Will Deacon <will@...nel.org>
To: Leon Romanovsky <leon@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Jason Gunthorpe <jgg@...pe.ca>,
Robin Murphy <robin.murphy@....com>, Joerg Roedel <joro@...tes.org>,
Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Leon Romanovsky <leonro@...dia.com>,
Keith Busch <kbusch@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-rdma@...r.kernel.org, iommu@...ts.linux.dev,
linux-nvme@...ts.infradead.org, linux-pci@...r.kernel.org,
kvm@...r.kernel.org, linux-mm@...ck.org,
Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v3 07/17] dma-mapping: Implement link/unlink ranges API
On Sun, Nov 10, 2024 at 03:46:54PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@...dia.com>
>
> Introduce new DMA APIs that allow callers in layers above the DMA core
> to link multiple buffers into a single contiguous IOVA range.
>
> With the proposed API, callers will perform the following steps.
> In map path:
> if (dma_can_use_iova(...))
> dma_iova_alloc()
> for (page in range)
> dma_iova_link_next(...)
> dma_iova_sync(...)
> else
> /* Fallback to legacy map pages */
> for (all pages)
> dma_map_page(...)
>
> In unmap path:
> if (dma_can_use_iova(...))
> dma_iova_destroy()
> else
> for (all pages)
> dma_unmap_page(...)
>
> Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> ---
> drivers/iommu/dma-iommu.c | 259 ++++++++++++++++++++++++++++++++++++
> include/linux/dma-mapping.h | 32 +++++
> 2 files changed, 291 insertions(+)
[...]
> +/**
> + * dma_iova_link - Link a range of IOVA space
> + * @dev: DMA device
> + * @state: IOVA state
> + * @phys: physical address to link
> + * @offset: offset into the IOVA state to map into
> + * @size: size of the buffer
> + * @dir: DMA direction
> + * @attrs: attributes of mapping properties
> + *
> + * Link a range of IOVA space for the given IOVA state without IOTLB sync.
> + * This function is used to link multiple physical addresses in contiguous
> + * IOVA space without performing a costly IOTLB sync.
> + *
> + * The caller is responsible for calling dma_iova_sync() to sync the IOTLB
> + * once linking is complete.
> + */
> +int dma_iova_link(struct device *dev, struct dma_iova_state *state,
> + phys_addr_t phys, size_t offset, size_t size,
> + enum dma_data_direction dir, unsigned long attrs)
> +{
> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + struct iova_domain *iovad = &cookie->iovad;
> + size_t iova_start_pad = iova_offset(iovad, phys);
> +
> + if (WARN_ON_ONCE(iova_start_pad && offset > 0))
> + return -EIO;
> +
> + if (dev_use_swiotlb(dev, size, dir) && iova_offset(iovad, phys | size))
> + return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
> + size, dir, attrs);
> +
> + return __dma_iova_link(dev, state->addr + offset - iova_start_pad,
> + phys - iova_start_pad,
> + iova_align(iovad, size + iova_start_pad), dir, attrs);
> +}
> +EXPORT_SYMBOL_GPL(dma_iova_link);
> +
> +/**
> + * dma_iova_sync - Sync IOTLB
> + * @dev: DMA device
> + * @state: IOVA state
> + * @offset: offset into the IOVA state to sync
> + * @size: size of the buffer
> + *
> + * Sync the IOTLB for the given IOVA state. This function should be called
> + * on the IOVA-contiguous range created by one or more dma_iova_link() calls
> + * to sync the IOTLB.
> + */
> +int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
> + size_t offset, size_t size)
> +{
> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + struct iova_domain *iovad = &cookie->iovad;
> + dma_addr_t addr = state->addr + offset;
> + size_t iova_start_pad = iova_offset(iovad, addr);
> +
> + return iommu_sync_map(domain, addr - iova_start_pad,
> + iova_align(iovad, size + iova_start_pad));
> +}
> +EXPORT_SYMBOL_GPL(dma_iova_sync);
> +
> +static void iommu_dma_iova_unlink_range_slow(struct device *dev,
> + dma_addr_t addr, size_t size, enum dma_data_direction dir,
> + unsigned long attrs)
> +{
> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + struct iova_domain *iovad = &cookie->iovad;
> + size_t iova_start_pad = iova_offset(iovad, addr);
> + dma_addr_t end = addr + size;
> +
> + do {
> + phys_addr_t phys;
> + size_t len;
> +
> + phys = iommu_iova_to_phys(domain, addr);
> + if (WARN_ON(!phys))
> + /* bail out: continuing without advancing addr would spin forever */
> + return;
> + len = min_t(size_t,
> + end - addr, iovad->granule - iova_start_pad);
> +
> + if (!dev_is_dma_coherent(dev) &&
> + !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> + arch_sync_dma_for_cpu(phys, len, dir);
> +
> + swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
> +
> + addr += len;
> + iova_start_pad = 0;
> + } while (addr < end);
> +}
> +
> +static void __iommu_dma_iova_unlink(struct device *dev,
> + struct dma_iova_state *state, size_t offset, size_t size,
> + enum dma_data_direction dir, unsigned long attrs,
> + bool free_iova)
> +{
> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + struct iova_domain *iovad = &cookie->iovad;
> + dma_addr_t addr = state->addr + offset;
> + size_t iova_start_pad = iova_offset(iovad, addr);
> + struct iommu_iotlb_gather iotlb_gather;
> + size_t unmapped;
> +
> + if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
> + (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
> + iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
> +
> + iommu_iotlb_gather_init(&iotlb_gather);
> + iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
> +
> + size = iova_align(iovad, size + iova_start_pad);
> + addr -= iova_start_pad;
> + unmapped = iommu_unmap_fast(domain, addr, size, &iotlb_gather);
> + WARN_ON(unmapped != size);
Does the new API require that the 'size' passed to dma_iova_unlink()
exactly match the 'size' passed to the corresponding call to
dma_iova_link()? I ask because the IOMMU page-table code is built around
the assumption that partial unmap() operations never occur (i.e.
operations which could require splitting a huge mapping). We just
removed [1] that code from the Arm IO page-table implementations, so it
would be good to avoid adding it back for this.
Will
[1] https://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git/commit/?h=arm/smmu&id=33729a5fc0caf7a97d20507acbeee6b012e7e519