Message-ID: <5fc1f0ca52a85834b3e978c5d6a3171d7dd3c194.1750854543.git.leon@kernel.org>
Date: Wed, 25 Jun 2025 16:19:03 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Leon Romanovsky <leonro@...dia.com>,
Christoph Hellwig <hch@....de>,
Jonathan Corbet <corbet@....net>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Robin Murphy <robin.murphy@....com>,
Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Jérôme Glisse <jglisse@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org,
iommu@...ts.linux.dev,
virtualization@...ts.linux.dev,
kasan-dev@...glegroups.com,
linux-trace-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH 6/8] dma-mapping: fail early if physical address is mapped through platform callback
From: Leon Romanovsky <leonro@...dia.com>
All platforms which implement the .map_page() callback don't support
physical addresses that lack a real struct page behind them. Add a check
so that such mappings fail early.
Signed-off-by: Leon Romanovsky <leonro@...dia.com>
---
kernel/dma/mapping.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
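
As a side note, below is a minimal sketch (not part of the patch) of the
early-fail logic in plain C, assuming the usual kernel headers; the helper
name sketch_map_via_ops() is made up purely for illustration:

#include <linux/dma-map-ops.h>	/* struct dma_map_ops, DMA_MAPPING_ERROR */
#include <linux/mm.h>		/* pfn_valid(), page_to_phys() */
#include <linux/pfn.h>		/* PHYS_PFN() */

static dma_addr_t sketch_map_via_ops(struct device *dev,
				     const struct dma_map_ops *ops,
				     struct page *page, unsigned long offset,
				     size_t size, enum dma_data_direction dir,
				     unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;

	/*
	 * .map_page() implementations expect a real struct page, so bail
	 * out before calling them if the PFN behind this address is bogus.
	 */
	if (IS_ENABLED(CONFIG_DMA_API_DEBUG) &&
	    unlikely(!pfn_valid(PHYS_PFN(phys))))
		return DMA_MAPPING_ERROR;

	return ops->map_page(dev, page, offset, size, dir, attrs);
}

Gating the pfn_valid() lookup behind CONFIG_DMA_API_DEBUG keeps the extra
check off production fast paths while still catching bad callers in debug
builds.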
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 709405d46b2b..74efb6909103 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
{
const struct dma_map_ops *ops = get_dma_ops(dev);
phys_addr_t phys = page_to_phys(page) + offset;
+ bool is_pfn_valid = true;
dma_addr_t addr;

BUG_ON(!valid_dma_direction(dir));
@@ -170,8 +171,20 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
else if (use_dma_iommu(dev))
addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
- else
+ else {
+ if (IS_ENABLED(CONFIG_DMA_API_DEBUG))
+ is_pfn_valid = pfn_valid(PHYS_PFN(phys));
+
+ if (unlikely(!is_pfn_valid))
+ return DMA_MAPPING_ERROR;
+
+ /*
+ * All platforms which implement .map_page() don't support
+ * non-struct page backed addresses.
+ */
addr = ops->map_page(dev, page, offset, size, dir, attrs);
+ }
+
kmsan_handle_dma(phys, size, dir);
trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
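
For context, a caller observes this early failure the same way as any other
mapping failure, via dma_mapping_error(); the sketch below is hypothetical
and not part of the patch:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int sketch_map_buffer(struct device *dev, struct page *page,
			     size_t len, dma_addr_t *out)
{
	dma_addr_t addr = dma_map_page_attrs(dev, page, 0, len,
					     DMA_TO_DEVICE, 0);

	/* Covers the new pfn_valid() early fail as well. */
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	*out = addr;
	return 0;
}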
--
2.49.0