Message-ID: <20250115083340.GL3146852@unreal>
Date: Wed, 15 Jan 2025 10:33:40 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Robin Murphy <robin.murphy@....com>
Cc: Jens Axboe <axboe@...nel.dk>, Jason Gunthorpe <jgg@...pe.ca>,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Keith Busch <kbusch@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-rdma@...r.kernel.org, iommu@...ts.linux.dev,
linux-nvme@...ts.infradead.org, linux-pci@...r.kernel.org,
kvm@...r.kernel.org, linux-mm@...ck.org,
Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v5 07/17] dma-mapping: Implement link/unlink ranges API
On Tue, Jan 14, 2025 at 08:50:35PM +0000, Robin Murphy wrote:
> On 17/12/2024 1:00 pm, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@...dia.com>
> >
> > Introduce new DMA APIs to perform DMA linkage of buffers in layers
> > higher than the DMA API.
> >
> > In the proposed API, the callers will perform the following steps.
> >
> > In the map path:
> >
> >   if (dma_can_use_iova(...))
> >       dma_iova_alloc()
> >       for (page in range)
> >           dma_iova_link_next(...)
> >       dma_iova_sync(...)
> >   else
> >       /* Fallback to legacy map pages */
> >       for (all pages)
> >           dma_map_page(...)
> >
> > In the unmap path:
> >
> >   if (dma_can_use_iova(...))
> >       dma_iova_destroy()
> >   else
> >       for (all pages)
> >           dma_unmap_page(...)
> >
> > Reviewed-by: Christoph Hellwig <hch@....de>
> > Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> > ---
> > drivers/iommu/dma-iommu.c | 259 ++++++++++++++++++++++++++++++++++++
> > include/linux/dma-mapping.h | 32 +++++
> > 2 files changed, 291 insertions(+)
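
For illustration, a minimal sketch of a caller following the pattern
above. The function names come from the commit-message pseudocode; the
exact signatures are assumptions, and error handling is elided:

	/*
	 * Sketch only: signatures are guessed from the pseudocode
	 * above, not taken from the final API.
	 */
	static int map_buffer(struct device *dev,
			      struct dma_iova_state *state,
			      struct page **pages, int nr_pages,
			      dma_addr_t *dma, size_t size)
	{
		int i, ret;

		if (dma_can_use_iova(state)) {
			ret = dma_iova_alloc(dev, state, size);
			if (ret)
				return ret;
			for (i = 0; i < nr_pages; i++)
				dma_iova_link_next(dev, state,
						   page_to_phys(pages[i]),
						   PAGE_SIZE);
			return dma_iova_sync(dev, state, 0, size);
		}

		/* Fallback to legacy per-page mapping. */
		for (i = 0; i < nr_pages; i++)
			dma[i] = dma_map_page(dev, pages[i], 0,
					      PAGE_SIZE, DMA_TO_DEVICE);
		return 0;
	}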
<...>
> > +static void iommu_dma_iova_unlink_range_slow(struct device *dev,
> > + dma_addr_t addr, size_t size, enum dma_data_direction dir,
> > + unsigned long attrs)
> > +{
> > + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > + struct iova_domain *iovad = &cookie->iovad;
> > + size_t iova_start_pad = iova_offset(iovad, addr);
> > + dma_addr_t end = addr + size;
> > +
> > + do {
> > + phys_addr_t phys;
> > + size_t len;
> > +
> > + phys = iommu_iova_to_phys(domain, addr);
> > + if (WARN_ON(!phys))
> > + continue;
>
> Infinite WARN_ON loop, nice.
No problem, will change it to WARN_ON_ONCE.
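
(As a sketch of a terminating shape for that loop: besides warning only
once, the address still has to advance past the granule whose lookup
failed, otherwise the loop spins forever. One possible shape, not
necessarily the actual fix:)

	do {
		phys_addr_t phys = iommu_iova_to_phys(domain, addr);
		size_t len = min_t(size_t, end - addr,
				   iovad->granule - iova_start_pad);

		/* Warn once, but keep advancing so a failed lookup
		 * cannot spin forever. */
		if (!WARN_ON_ONCE(!phys))
			swiotlb_tbl_unmap_single(dev, phys, len, dir,
						 attrs);

		addr += len;
		iova_start_pad = 0;
	} while (addr < end);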
>
> > + len = min_t(size_t,
> > + end - addr, iovad->granule - iova_start_pad);
<...>
> > +
> > + swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
>
> This is still dumb. For everything other than the first and last granule,
> either it's definitely not in SWIOTLB, or it is (per the unaligned size
> thing above) but then "len" is definitely wrong and SWIOTLB will complain.
As Christoph said, we tested this with NVMe, which exercises the
SWIOTLB path, and despite plenty of unaligned sizes it worked without
any SWIOTLB complaints.
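
(For reference, a worked example of the quoted min_t() clamping, with
made-up numbers: a 4K granule, addr starting 0x200 into its granule,
and a total size of 0x2600. Only the head and tail passes can be
shorter than a granule; every middle granule is unmapped with the full
granule length, which is the case Robin is pointing at:)

	/* pass 1: len = min(0x2600, 0x1000 - 0x200) = 0xe00  (partial head) */
	/* pass 2: len = min(0x1800, 0x1000)         = 0x1000 (full granule) */
	/* pass 3: len = min(0x800,  0x1000)         = 0x800  (partial tail) */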
>
> > +
> > + addr += len;
> > + iova_start_pad = 0;
> > + } while (addr < end);
> > +}
> > +
> > +static void __iommu_dma_iova_unlink(struct device *dev,
> > + struct dma_iova_state *state, size_t offset, size_t size,
> > + enum dma_data_direction dir, unsigned long attrs,
> > + bool free_iova)
> > +{
> > + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > + struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > + struct iova_domain *iovad = &cookie->iovad;
> > + dma_addr_t addr = state->addr + offset;
> > + size_t iova_start_pad = iova_offset(iovad, addr);
> > + struct iommu_iotlb_gather iotlb_gather;
> > + size_t unmapped;
> > +
> > + if ((state->__size & DMA_IOVA_USE_SWIOTLB) ||
> > + (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)))
> > + iommu_dma_iova_unlink_range_slow(dev, addr, size, dir, attrs);
> > +
> > + iommu_iotlb_gather_init(&iotlb_gather);
> > + iotlb_gather.queued = free_iova && READ_ONCE(cookie->fq_domain);
>
> This makes things needlessly hard to follow; just keep the IOVA freeing
> separate. And by that I really mean just have unlink and free, since
> dma_iova_destroy() really doesn't seem worth the extra complexity to save
> one line in one caller...
In the initial versions, I didn't implement dma_iova_destroy() and used
the unlink->free calls directly. Both Jason and Christoph asked me to
provide dma_iova_destroy() so that we can reuse the same iotlb_gather.
Almost all callers (except HMM-like ones) will use this API call.
Let's keep it.
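
As a sketch of what that looks like for a caller (signatures are
assumptions based on this discussion, not the final API):

	/* Common case: one call unmaps the range and frees the IOVA,
	 * reusing a single iotlb_gather across both steps. */
	dma_iova_destroy(dev, state, size, dir, attrs);

	/* HMM-like users that unlink pieces and free the IOVA later: */
	dma_iova_unlink(dev, state, offset, size, dir, attrs);
	/* ... more unlinks ... */
	dma_iova_free(dev, state);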
Thanks