Message-ID: <20250314184911.GR1322339@unreal>
Date: Fri, 14 Mar 2025 20:49:11 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>, Christoph Hellwig <hch@....de>,
Jason Gunthorpe <jgg@...pe.ca>, Jens Axboe <axboe@...nel.dk>,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>, Keith Busch <kbusch@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Logan Gunthorpe <logang@...tatee.com>,
Yishai Hadas <yishaih@...dia.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-rdma@...r.kernel.org, iommu@...ts.linux.dev,
linux-nvme@...ts.infradead.org, linux-pci@...r.kernel.org,
kvm@...r.kernel.org, linux-mm@...ck.org,
Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v7 00/17] Provide a new two step DMA mapping API
On Fri, Mar 14, 2025 at 11:52:58AM +0100, Marek Szyprowski wrote:
> On 12.03.2025 20:32, Leon Romanovsky wrote:
> > On Wed, Mar 12, 2025 at 10:28:32AM +0100, Marek Szyprowski wrote:
> >> Hi Robin
> >>
> >> On 28.02.2025 20:54, Robin Murphy wrote:
> >>> On 20/02/2025 12:48 pm, Leon Romanovsky wrote:
> >>>> On Wed, Feb 05, 2025 at 04:40:20PM +0200, Leon Romanovsky wrote:
> >>>>> From: Leon Romanovsky <leonro@...dia.com>
> >>>>>
> >>>>> Changelog:
> >>>>> v7:
> >>>>> * Rebased to v6.14-rc1
> >>>> <...>
> >>>>
> >>>>> Christoph Hellwig (6):
> >>>>> PCI/P2PDMA: Refactor the p2pdma mapping helpers
> >>>>> dma-mapping: move the PCI P2PDMA mapping helpers to pci-p2pdma.h
> >>>>> iommu: generalize the batched sync after map interface
> >>>>> iommu/dma: Factor out a iommu_dma_map_swiotlb helper
> >>>>> dma-mapping: add a dma_need_unmap helper
> >>>>> docs: core-api: document the IOVA-based API
> >>>>>
> >>>>> Leon Romanovsky (11):
> >>>>> iommu: add kernel-doc for iommu_unmap and iommu_unmap_fast
> >>>>> dma-mapping: Provide an interface to allow allocate IOVA
> >>>>> dma-mapping: Implement link/unlink ranges API
> >>>>> mm/hmm: let users to tag specific PFN with DMA mapped bit
> >>>>> mm/hmm: provide generic DMA managing logic
> >>>>> RDMA/umem: Store ODP access mask information in PFN
> >>>>>      RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
> >>>>> RDMA/umem: Separate implicit ODP initialization from explicit ODP
> >>>>>      vfio/mlx5: Explicitly use number of pages instead of allocated length
> >>>>> vfio/mlx5: Rewrite create mkey flow to allow better code reuse
> >>>>> vfio/mlx5: Enable the DMA link API
> >>>>>
> >>>>> Documentation/core-api/dma-api.rst | 70 ++++
> >>>>> drivers/infiniband/core/umem_odp.c | 250 +++++---------
> >>>>> drivers/infiniband/hw/mlx5/mlx5_ib.h | 12 +-
> >>>>> drivers/infiniband/hw/mlx5/odp.c | 65 ++--
> >>>>> drivers/infiniband/hw/mlx5/umr.c | 12 +-
> >>>>> drivers/iommu/dma-iommu.c | 468 +++++++++++++++++++++++----
> >>>>> drivers/iommu/iommu.c | 84 ++---
> >>>>> drivers/pci/p2pdma.c | 38 +--
> >>>>> drivers/vfio/pci/mlx5/cmd.c | 375 +++++++++++----------
> >>>>> drivers/vfio/pci/mlx5/cmd.h | 35 +-
> >>>>> drivers/vfio/pci/mlx5/main.c | 87 +++--
> >>>>> include/linux/dma-map-ops.h | 54 ----
> >>>>> include/linux/dma-mapping.h | 85 +++++
> >>>>> include/linux/hmm-dma.h | 33 ++
> >>>>> include/linux/hmm.h | 21 ++
> >>>>> include/linux/iommu.h | 4 +
> >>>>> include/linux/pci-p2pdma.h | 84 +++++
> >>>>> include/rdma/ib_umem_odp.h | 25 +-
> >>>>> kernel/dma/direct.c | 44 +--
> >>>>> kernel/dma/mapping.c | 18 ++
> >>>>> mm/hmm.c | 264 +++++++++++++--
> >>>>> 21 files changed, 1435 insertions(+), 693 deletions(-)
> >>>>> create mode 100644 include/linux/hmm-dma.h
> >>>> Kind reminder.
> > <...>
> >
> >> Removing the need for scatterlists was advertised as the main goal of
> >> this new API, but it looks like similar effects can be achieved by
> >> just iterating over the pages and calling the page-based DMA API directly.
> > Such iteration isn't enough because P2P pages don't have struct pages,
> > so you can't use the dma_map_page_attrs() call reliably and efficiently.
> >
> > The only way to do so is to use dma_map_sg_attrs(), which relies on the SG
> > list (the very thing we want to remove) to map P2P pages.
>
> That's something I don't get yet. How can P2P pages be used with
> dma_map_sg_attrs(), but not with dma_map_page_attrs()? Both operate
> internally on struct page pointers.
Yes, and no.
See the users of the is_pci_p2pdma_page(...) function. In the dma_*_sg() APIs,
there is a real check for, and support of, P2P. In the dma_map_page_attrs()
variants, this support is missing (the P2P case is either ignored or an error
is returned).
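
As an illustration (a hypothetical driver-side sketch, not code from this
series), manually iterating pages leaves you with nothing P2P-aware to call:

#include <linux/dma-mapping.h>

/*
 * Hypothetical sketch: mapping a page array by hand. There is no P2P-aware
 * variant of dma_map_page_attrs(), so a page for which is_pci_p2pdma_page()
 * is true is treated like ordinary host memory (or fails), and the caller
 * would have to reimplement the bus-address vs. host-bridge decision itself.
 */
static int map_pages_by_hand(struct device *dev, struct page **pages,
			     unsigned int npages, dma_addr_t *dma)
{
	unsigned int i;

	for (i = 0; i < npages; i++) {
		dma[i] = dma_map_page_attrs(dev, pages[i], 0, PAGE_SIZE,
					    DMA_BIDIRECTIONAL, 0);
		if (dma_mapping_error(dev, dma[i]))
			return -ENOMEM;
	}
	return 0;
}

Today dma_map_sg_attrs() is the only mapping entry point that makes the P2P
decision (per SG entry), which is why P2P users are currently funneled
through scatterlists.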
>
> >> Maybe I missed something. I still see some advantages in this DMA API
> >> extension, but I would also like to see clear benefits from
> >> introducing it, such as perf logs or other benchmark summaries.
> > We haven't focused on performance yet; however, Christoph mentioned in his
> > block RFC [1] that even a simple conversion should improve performance, as
> > we perform one P2P lookup per bio and not per SG entry as was done
> > before [2]. In addition, it decreases memory usage [3] too.
> >
> > [1] https://lore.kernel.org/all/cover.1730037261.git.leon@kernel.org/
> > [2] https://lore.kernel.org/all/34d44537a65aba6ede215a8ad882aeee028b423a.1730037261.git.leon@kernel.org/
> > [3] https://lore.kernel.org/all/383557d0fa1aa393dbab4e1daec94b6cced384ab.1730037261.git.leon@kernel.org/
> >
> > So the clear benefits are:
> > 1. Ability to use the subsystem's native structure, e.g. bio for block,
> > umem for RDMA, dmabuf for DRM, etc. It removes the current wasteful
> > conversions to and from SG just to work with the DMA API.
> > 2. Batched request and IOTLB sync optimizations (performed only once).
> > 3. Avoid the very expensive lookup of the pgmap pointer.
> > 4. Expose MMIO over VFIO without hacks (a PCI BAR doesn't have struct pages).
> > See this series for such a hack:
> > https://lore.kernel.org/all/20250307052248.405803-1-vivek.kasireddy@intel.com/
>
> I see those benefits and I admit that for the typical DMA-with-IOMMU case it
> would improve some things. I think that the main concern from Robin was how
> to handle the cases without an IOMMU.
In such a case, we fall back to the non-IOMMU flow (the old, well-established one).
See this HMM patch as an example: https://lore.kernel.org/all/a796da065fa8a9cb35d591ce6930400619572dcc.1738765879.git.leonro@nvidia.com/
+dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
+ size_t idx,
+ struct pci_p2pdma_map_state *p2pdma_state)
...
+ if (dma_use_iova(state)) {
...
+ } else {
...
+ dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size,
+ DMA_BIDIRECTIONAL);
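
Condensed, the overall calling pattern looks roughly like this (a sketch only,
with error unwinding, attrs and P2P handling trimmed; the dma_iova_* names
follow the documentation patch in this series):

#include <linux/dma-mapping.h>

/*
 * Sketch of the two-step flow: try the IOVA path first and fall back to
 * the old per-page flow when it is not available for this device. Real
 * users must unwind with dma_iova_destroy() on errors and handle P2P
 * pages via the pci_p2pdma_map_state, which is omitted here.
 */
static int two_step_map_sketch(struct device *dev, struct page **pages,
			       unsigned int nr_pages, dma_addr_t *dma,
			       struct dma_iova_state *state)
{
	size_t size = (size_t)nr_pages * PAGE_SIZE;
	unsigned int i;
	int ret;

	memset(state, 0, sizeof(*state));

	if (dma_iova_try_alloc(dev, state, page_to_phys(pages[0]), size)) {
		/* Step 1: one contiguous IOVA range reserved up front. */
		for (i = 0; i < nr_pages; i++) {
			/* Step 2: link each chunk into that range. */
			ret = dma_iova_link(dev, state,
					    page_to_phys(pages[i]),
					    i * PAGE_SIZE, PAGE_SIZE,
					    DMA_BIDIRECTIONAL, 0);
			if (ret)
				return ret;
		}
		/* One batched IOTLB sync for the whole range. */
		return dma_iova_sync(dev, state, 0, size);
	}

	/* Non-IOMMU (or unsupported) case: the old, per-page flow. */
	for (i = 0; i < nr_pages; i++) {
		dma[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
				      DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma[i]))
			return -ENOMEM;
	}
	return 0;
}

The quoted hmm_dma_map_pfn() above is essentially this branching, done per PFN.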
Thanks
>
> Best regards
> --
> Marek Szyprowski, PhD
> Samsung R&D Institute Poland
>