Message-ID: <d408b1c7-eabf-4a1e-861c-b2ddf8bf9f0e@samsung.com>
Date: Wed, 12 Mar 2025 10:28:32 +0100
From: Marek Szyprowski <m.szyprowski@...sung.com>
To: Robin Murphy <robin.murphy@....com>, Leon Romanovsky <leon@...nel.org>,
Christoph Hellwig <hch@....de>, Jason Gunthorpe <jgg@...pe.ca>
Cc: Jens Axboe <axboe@...nel.dk>, Joerg Roedel <joro@...tes.org>, Will
Deacon <will@...nel.org>, Sagi Grimberg <sagi@...mberg.me>, Keith Busch
<kbusch@...nel.org>, Bjorn Helgaas <bhelgaas@...gle.com>, Logan Gunthorpe
<logang@...tatee.com>, Yishai Hadas <yishaih@...dia.com>, Shameer Kolothum
<shameerali.kolothum.thodi@...wei.com>, Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, Jonathan Corbet <corbet@....net>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, linux-rdma@...r.kernel.org,
iommu@...ts.linux.dev, linux-nvme@...ts.infradead.org,
linux-pci@...r.kernel.org, kvm@...r.kernel.org, linux-mm@...ck.org, Randy
Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v7 00/17] Provide a new two step DMA mapping API
Hi Robin,
On 28.02.2025 20:54, Robin Murphy wrote:
> On 20/02/2025 12:48 pm, Leon Romanovsky wrote:
>> On Wed, Feb 05, 2025 at 04:40:20PM +0200, Leon Romanovsky wrote:
>>> From: Leon Romanovsky <leonro@...dia.com>
>>>
>>> Changelog:
>>> v7:
>>> * Rebased to v6.14-rc1
>>
>> <...>
>>
>>> Christoph Hellwig (6):
>>> PCI/P2PDMA: Refactor the p2pdma mapping helpers
>>> dma-mapping: move the PCI P2PDMA mapping helpers to pci-p2pdma.h
>>> iommu: generalize the batched sync after map interface
>>> iommu/dma: Factor out a iommu_dma_map_swiotlb helper
>>> dma-mapping: add a dma_need_unmap helper
>>> docs: core-api: document the IOVA-based API
>>>
>>> Leon Romanovsky (11):
>>> iommu: add kernel-doc for iommu_unmap and iommu_unmap_fast
>>> dma-mapping: Provide an interface to allow allocate IOVA
>>> dma-mapping: Implement link/unlink ranges API
>>> mm/hmm: let users to tag specific PFN with DMA mapped bit
>>> mm/hmm: provide generic DMA managing logic
>>> RDMA/umem: Store ODP access mask information in PFN
>>> RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
>>> RDMA/umem: Separate implicit ODP initialization from explicit ODP
>>> vfio/mlx5: Explicitly use number of pages instead of allocated length
>>> vfio/mlx5: Rewrite create mkey flow to allow better code reuse
>>> vfio/mlx5: Enable the DMA link API
>>>
>>> Documentation/core-api/dma-api.rst | 70 ++++
>>> drivers/infiniband/core/umem_odp.c | 250 +++++---------
>>> drivers/infiniband/hw/mlx5/mlx5_ib.h | 12 +-
>>> drivers/infiniband/hw/mlx5/odp.c | 65 ++--
>>> drivers/infiniband/hw/mlx5/umr.c | 12 +-
>>> drivers/iommu/dma-iommu.c | 468 +++++++++++++++++++++++----
>>> drivers/iommu/iommu.c | 84 ++---
>>> drivers/pci/p2pdma.c | 38 +--
>>> drivers/vfio/pci/mlx5/cmd.c | 375 +++++++++++----------
>>> drivers/vfio/pci/mlx5/cmd.h | 35 +-
>>> drivers/vfio/pci/mlx5/main.c | 87 +++--
>>> include/linux/dma-map-ops.h | 54 ----
>>> include/linux/dma-mapping.h | 85 +++++
>>> include/linux/hmm-dma.h | 33 ++
>>> include/linux/hmm.h | 21 ++
>>> include/linux/iommu.h | 4 +
>>> include/linux/pci-p2pdma.h | 84 +++++
>>> include/rdma/ib_umem_odp.h | 25 +-
>>> kernel/dma/direct.c | 44 +--
>>> kernel/dma/mapping.c | 18 ++
>>> mm/hmm.c | 264 +++++++++++++--
>>> 21 files changed, 1435 insertions(+), 693 deletions(-)
>>> create mode 100644 include/linux/hmm-dma.h
>>
>> Kind reminder.
>
> ...that you've simply reposted the same thing again? Without doing
> anything to address the bugs, inconsistencies, fundamental design
> flaws in claiming to be something it cannot possibly be, the egregious
> abuse of DMA_ATTR_SKIP_CPU_SYNC proudly highlighting how
> unfit-for-purpose the most basic part of the whole idea is, nor
> *still* the complete lack of any demonstrable justification of how
> callers who supposedly can't use the IOMMU API actually benefit from
> adding all the complexity of using the IOMMU API in a hat, while
> still using the streaming DMA API as well?
>
> Yeah, consider me reminded.
>
>
>
> In case I need to make it any more explicit, NAK to this not-generic
> not-DMA-mapping API, until you can come up with either something which
> *can* actually work in any kind of vaguely generic manner as claimed,
> or instead settle on a reasonable special-case solution for
> justifiable special cases. Bikeshedding and rebasing through half a
> dozen versions, while ignoring fundamental issues I've been pointing
> out from the very beginning, has not somehow magically made this
> series mature and acceptable to merge.
>
> Honestly, given certain other scenarios we may also end up having to
> deal with, if by the time everything broken is taken away, it were to
> end up stripped all the way back to something well-reasoned like:
>
> "Some drivers want more control of their DMA buffer layout than the
> general-purpose IOVA allocator is able to provide through the DMA
> mapping APIs, but also would rather not have to deal with managing an
> entire IOMMU domain and address space, making MSIs work, etc. Expose
> iommu_dma_alloc_iova() and some trivial IOMMU API wrappers to allow
> drivers of coherent devices to claim regions of the default domain
> wherein they can manage their own mappings directly."
>
> ...I wouldn't necessarily disagree.
Well, this is definitely not the review I expected. I admit that I
wasn't involved in this proposal nor in the discussion about it, and I
wasn't able to devote enough time to keeping myself up to date. Now
I've tried to read all the required backlog, and I must admit that this
was quite demanding.
If you didn't like this design from the beginning, then please say so
early instead of pointing out random minor issues in the code. There
has been plenty of time to discuss the overall approach if you thought
it was wrong. What to do now?
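
Coming back to the well-reasoned variant at the end of your mail:
exposing iommu_dma_alloc_iova() plus some trivial IOMMU API wrappers
would indeed be a small amount of code. Just to illustrate the scale, a
wrapper of that kind might look more or less like the sketch below
(dma_iova_map_page() is a made-up name here; only
iommu_get_domain_for_dev() and iommu_map() are existing interfaces, and
error handling is abbreviated):

#include <linux/iommu.h>
#include <linux/io.h>

/*
 * Hypothetical helper: map one page at a driver-chosen IOVA inside the
 * device's default DMA domain, so the driver controls its own buffer
 * layout without having to own a whole domain.
 */
static int dma_iova_map_page(struct device *dev, dma_addr_t iova,
			     struct page *page, size_t size, int prot)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (!domain)
		return -ENODEV;

	return iommu_map(domain, iova, page_to_phys(page), size, prot,
			 GFP_KERNEL);
}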
Removing the need for scatterlists was advertised as the main goal of
this new API, but it looks like similar effects can be achieved by
simply iterating over the pages and calling the page-based DMA API
directly. Maybe I missed something. I still see some advantages in this
DMA API extension, but I would also like to see clear benefits from
introducing it, such as perf logs or some other benchmark summary.
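
To be concrete, the alternative I have in mind is nothing more than a
loop over the existing page-based streaming API, roughly like the
sketch below (map_pages_individually() is just an illustrative name):

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/*
 * Map an array of pages one entry at a time with dma_map_page(),
 * unwinding the already-mapped entries on failure, instead of first
 * building a scatterlist.
 */
static int map_pages_individually(struct device *dev, struct page **pages,
				  dma_addr_t *dma, unsigned int npages)
{
	unsigned int i;

	for (i = 0; i < npages; i++) {
		dma[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
				      DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma[i]))
			goto unwind;
	}
	return 0;

unwind:
	while (i--)
		dma_unmap_page(dev, dma[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
	return -ENOMEM;
}

If the new API cannot measurably beat something like this, the added
complexity is hard to justify.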
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland