Message-ID: <ccc8eeba-757a-440d-80d3-9158e80c19fe@csgroup.eu>
Date: Thu, 14 Aug 2025 21:05:42 +0200
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Leon Romanovsky <leon@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Jason Gunthorpe <jgg@...dia.com>,
Abdiel Janulgue <abdiel.janulgue@...il.com>,
Alexander Potapenko <glider@...gle.com>, Alex Gaynor
<alex.gaynor@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@....de>, Danilo Krummrich <dakr@...nel.org>,
iommu@...ts.linux.dev, Jason Wang <jasowang@...hat.com>,
Jens Axboe <axboe@...nel.dk>, Joerg Roedel <joro@...tes.org>,
Jonathan Corbet <corbet@....net>, Juergen Gross <jgross@...e.com>,
kasan-dev@...glegroups.com, Keith Busch <kbusch@...nel.org>,
linux-block@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-nvme@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
linux-trace-kernel@...r.kernel.org, Madhavan Srinivasan
<maddy@...ux.ibm.com>, Masami Hiramatsu <mhiramat@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>, "Michael S. Tsirkin"
<mst@...hat.com>, Miguel Ojeda <ojeda@...nel.org>,
Robin Murphy <robin.murphy@....com>, rust-for-linux@...r.kernel.org,
Sagi Grimberg <sagi@...mberg.me>, Stefano Stabellini
<sstabellini@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
virtualization@...ts.linux.dev, Will Deacon <will@...nel.org>,
xen-devel@...ts.xenproject.org
Subject: Re: [PATCH v3 00/16] dma-mapping: migrate to physical address-based
API
On 14/08/2025 at 19:53, Leon Romanovsky wrote:
> Changelog:
> v3:
> * Fixed a typo in the word "cacheable"
> * Simplified the kmsan patch a lot, down to a simple argument refactoring
v2 was sent today at 12:13 and v3 today at 19:53... for only that?
Have you read
https://docs.kernel.org//process/submitting-patches.html#don-t-get-discouraged-or-impatient ?
Thanks
Christophe
> v2: https://lore.kernel.org/all/cover.1755153054.git.leon@kernel.org
> * Used commit messages and cover letter from Jason
> * Moved setting IOMMU_MMIO flag to dma_info_to_prot function
> * Micro-optimized the code
> * Rebased code on v6.17-rc1
> v1: https://lore.kernel.org/all/cover.1754292567.git.leon@kernel.org
> * Added a new DMA_ATTR_MMIO attribute to indicate the
>   PCI_P2PDMA_MAP_THRU_HOST_BRIDGE path.
> * Rewrote the dma_map_* functions to use this new attribute
> v0: https://lore.kernel.org/all/cover.1750854543.git.leon@kernel.org/
> ------------------------------------------------------------------------
>
> This series refactors the DMA mapping code to use physical addresses
> as the primary interface instead of page+offset parameters. This
> change aligns the DMA API with the underlying hardware reality where
> DMA operations work with physical addresses, not page structures.
>
> The series maintains export symbol backward compatibility by keeping
> the old page-based API as wrapper functions around the new physical
> address-based implementations.
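>
> As a rough sketch of that wrapper approach (the dma_map_phys() name is
> taken from the patch list below; its exact signature and the inline
> wrapper form are assumptions, not the actual patch contents), the
> existing page-based entry point could reduce to roughly:
>
>   /* hypothetical wrapper, e.g. in include/linux/dma-mapping.h */
>   static inline dma_addr_t dma_map_page_attrs(struct device *dev,
>           struct page *page, size_t offset, size_t size,
>           enum dma_data_direction dir, unsigned long attrs)
>   {
>       /* Fold page + offset into a plain physical address ... */
>       phys_addr_t phys = page_to_phys(page) + offset;
>
>       /* ... and defer to the new physical-address based entry point. */
>       return dma_map_phys(dev, phys, size, dir, attrs);
>   }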
>
> This series refactors the DMA mapping API to provide a phys_addr_t-based,
> struct-page-free external API that can handle all the mapping cases we
> want in modern systems (a usage sketch follows the list):
>
> - struct page based cacheable DRAM
> - struct page MEMORY_DEVICE_PCI_P2PDMA PCI peer-to-peer non-cacheable
>   MMIO
> - struct page-less PCI peer-to-peer non-cacheable MMIO
> - struct page-less "resource" MMIO
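>
> A minimal caller-side sketch of the cacheable-DRAM case and the MMIO
> cases (the dma_map_phys()/dma_unmap_phys() names come from the patch
> titles; the signatures and error handling shown here are assumptions):
>
>   /* phys: cacheable DRAM; bar_phys: PCI BAR address, no struct page. */
>   static int sketch_map_both(struct device *dev, phys_addr_t phys,
>               phys_addr_t bar_phys, size_t len)
>   {
>       dma_addr_t dma;
>
>       /* Cacheable DRAM described only by its physical address: */
>       dma = dma_map_phys(dev, phys, len, DMA_TO_DEVICE, 0);
>       if (dma_mapping_error(dev, dma))
>           return -ENOMEM;
>       /* ... device uses 'dma' ... */
>       dma_unmap_phys(dev, dma, len, DMA_TO_DEVICE, 0);
>
>       /* P2P / BAR MMIO: no struct page, no cache maintenance: */
>       dma = dma_map_phys(dev, bar_phys, len, DMA_BIDIRECTIONAL,
>                  DMA_ATTR_MMIO);
>       if (dma_mapping_error(dev, dma))
>           return -EIO;
>       /* ... device uses 'dma' ... */
>       dma_unmap_phys(dev, dma, len, DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
>
>       return 0;
>   }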
>
> Overall this gets much closer to Matthew's long-term wish for
> struct-page-less IO to cacheable DRAM. The remaining primary work would
> be on the mm side, to allow kmap_local_pfn()/phys_to_virt() to work on
> a phys_addr_t without a struct page.
>
> The general design is to remove struct page usage entirely from the
> DMA API inner layers. Flows that need a KVA for the physical address
> can use kmap_local_pfn() or phys_to_virt(). This isolates the struct
> page requirements to MM code only. Long term, all removals of struct
> page usage support Matthew's memdesc project, which seeks to
> substantially transform how struct page works.
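>
> For reference, those two helpers are already enough for a
> struct-page-free CPU access path; a minimal sketch, assuming ordinary
> cacheable DRAM and an access that stays within one page:
>
>   #include <linux/highmem.h>  /* kmap_local_pfn(), kunmap_local() */
>   #include <linux/string.h>   /* memset() */
>
>   static void cpu_touch_phys(phys_addr_t phys, size_t len)
>   {
>       /*
>        * Highmem-safe mapping of an arbitrary PFN; the caller never
>        * touches a struct page.  On !HIGHMEM configurations
>        * phys_to_virt(phys) would do as well.
>        */
>       void *kva = kmap_local_pfn(PHYS_PFN(phys)) + offset_in_page(phys);
>
>       memset(kva, 0, len);    /* stand-in for a flush or bounce copy */
>       kunmap_local(kva);
>   }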
>
> Instead, the DMA API internals are made to work on phys_addr_t.
> Internally there are still dedicated 'page' and 'resource' flows,
> except they are now distinguished by the new DMA_ATTR_MMIO attribute
> instead of by callchain. Both flows use the same phys_addr_t.
>
> When DMA_ATTR_MMIO is specified, things work similarly to the existing
> 'resource' flow. kmap_local_pfn(), phys_to_virt(), phys_to_page(),
> pfn_valid(), etc. are never called on the phys_addr_t. This requires
> rejecting any configuration that would need swiotlb. CPU cache
> flushing is not required, and is avoided, as ATTR_MMIO also indicates
> that the address has no cacheable mappings. This effectively removes
> any DMA API side requirement to have a struct page when DMA_ATTR_MMIO
> is used.
>
> In the !DMA_ATTR_MMIO mode things work similarly to the 'page' flow,
> except that on the common path (no cache flush, no swiotlb) it never
> touches a struct page. When cache flushing or swiotlb copying is
> needed, kmap_local_pfn()/phys_to_virt() are used to get a KVA for CPU
> usage. This was already the case on the unmap side; now the map side
> is symmetric.
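>
> Putting the two flows side by side, an intentionally simplified,
> hypothetical sketch of the map side (needs_bounce(), bounce_map() and
> translate_to_dma() are illustrative placeholders, not functions from
> this series; dev_is_dma_coherent() and arch_sync_dma_for_device() are
> existing helpers used here only for illustration):
>
>   static dma_addr_t sketch_map_phys(struct device *dev, phys_addr_t phys,
>           size_t size, enum dma_data_direction dir, unsigned long attrs)
>   {
>       if (attrs & DMA_ATTR_MMIO) {
>           /*
>            * 'resource'-style flow: the phys_addr_t is never passed to
>            * phys_to_page()/kmap_local_pfn(), swiotlb bouncing is
>            * rejected (it would need a KVA), and no cache maintenance
>            * is performed.
>            */
>           if (needs_bounce(dev, phys, size))
>               return DMA_MAPPING_ERROR;
>           return translate_to_dma(dev, phys);
>       }
>
>       /*
>        * 'page'-style flow: the fast path (coherent device, no swiotlb)
>        * still never touches a struct page.  Only the bounce/flush
>        * cases need a KVA, obtained via kmap_local_pfn()/phys_to_virt().
>        */
>       if (needs_bounce(dev, phys, size))
>           phys = bounce_map(dev, phys, size, dir);
>       if (!dev_is_dma_coherent(dev))
>           arch_sync_dma_for_device(phys, size, dir);
>       return translate_to_dma(dev, phys);
>   }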
>
> Callers are adjusted to set DMA_ATTR_MMIO. Existing 'resource' users
> must set it. The existing struct page based MEMORY_DEVICE_PCI_P2PDMA
> path must also set it. This corrects some existing bugs where iommu
> mappings for P2P MMIO were improperly marked IOMMU_CACHE.
>
> Since ATTR_MMIO is made to work with all the existing DMA map entry
> points, particularly dma_iova_link(), this finally provides a way to
> use the new DMA API to map PCI P2P MMIO without creating a struct
> page. The
> VFIO DMABUF series demonstrates how this works. This is intended to
> replace the incorrect driver use of dma_map_resource() on PCI BAR
> addresses.
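>
> A hedged sketch of what such a struct-page-free P2P mapping could look
> like (the dma_iova_* signatures below are written from memory and
> should be treated as assumptions, dma_iova_free() in particular):
>
>   static int sketch_p2p_link(struct device *dev,
>           struct dma_iova_state *state, phys_addr_t bar_phys, size_t size)
>   {
>       int ret;
>
>       /* Reserve IOVA space; callers fall back to dma_map_phys() if not. */
>       if (!dma_iova_try_alloc(dev, state, bar_phys, size))
>           return -EOPNOTSUPP;
>
>       /* Link the raw BAR physical address, flagged as MMIO. */
>       ret = dma_iova_link(dev, state, bar_phys, 0, size,
>                   DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
>       if (ret) {
>           dma_iova_free(dev, state);
>           return ret;
>       }
>
>       return dma_iova_sync(dev, state, 0, size);
>   }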
>
> This series does the core code and modern flows. A followup series
> will give the same treatment to the legacy dma_ops implementation.
>
> Thanks
>
> Leon Romanovsky (16):
> dma-mapping: introduce new DMA attribute to indicate MMIO memory
> iommu/dma: implement DMA_ATTR_MMIO for dma_iova_link().
> dma-debug: refactor to use physical addresses for page mapping
> dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
> iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
> iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
> dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
> kmsan: convert kmsan_handle_dma to use physical addresses
> dma-mapping: handle MMIO flow in dma_map|unmap_page
> xen: swiotlb: Open code map_resource callback
> dma-mapping: export new dma_*map_phys() interface
> mm/hmm: migrate to physical address-based DMA mapping API
> mm/hmm: properly take MMIO path
> block-dma: migrate to dma_map_phys instead of map_page
> block-dma: properly take MMIO path
> nvme-pci: unmap MMIO pages with appropriate interface
>
> Documentation/core-api/dma-api.rst | 4 +-
> Documentation/core-api/dma-attributes.rst | 18 ++++
> arch/powerpc/kernel/dma-iommu.c | 4 +-
> block/blk-mq-dma.c | 15 ++-
> drivers/iommu/dma-iommu.c | 61 +++++------
> drivers/nvme/host/pci.c | 18 +++-
> drivers/virtio/virtio_ring.c | 4 +-
> drivers/xen/swiotlb-xen.c | 21 +++-
> include/linux/blk-mq-dma.h | 6 +-
> include/linux/blk_types.h | 2 +
> include/linux/dma-direct.h | 2 -
> include/linux/dma-map-ops.h | 8 +-
> include/linux/dma-mapping.h | 33 ++++++
> include/linux/iommu-dma.h | 11 +-
> include/linux/kmsan.h | 9 +-
> include/trace/events/dma.h | 9 +-
> kernel/dma/debug.c | 71 ++++---------
> kernel/dma/debug.h | 37 ++-----
> kernel/dma/direct.c | 22 +---
> kernel/dma/direct.h | 52 ++++++----
> kernel/dma/mapping.c | 117 +++++++++++++---------
> kernel/dma/ops_helpers.c | 6 +-
> mm/hmm.c | 19 ++--
> mm/kmsan/hooks.c | 7 +-
> rust/kernel/dma.rs | 3 +
> tools/virtio/linux/kmsan.h | 2 +-
> 26 files changed, 306 insertions(+), 255 deletions(-)
>