Message-Id: <20250429235122.537321-12-sashal@kernel.org>
Date: Tue, 29 Apr 2025 19:50:57 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: Balbir Singh <balbirs@...dia.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Christian König <christian.koenig@....com>,
Ingo Molnar <mingo@...nel.org>,
Kees Cook <kees@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Alex Deucher <alexander.deucher@....com>,
Bert Karwatzki <spasswolf@....de>,
Christoph Hellwig <hch@...radead.org>,
Christoph Hellwig <hch@....de>,
Sasha Levin <sashal@...nel.org>,
iommu@...ts.linux.dev
Subject: [PATCH AUTOSEL 6.12 12/37] dma/mapping.c: dev_dbg support for dma_addressing_limited
From: Balbir Singh <balbirs@...dia.com>
[ Upstream commit 2042c352e21d19eaf5f9e22fb6afce72293ef28c ]
An issue involving the forced use of bounce buffers was debugged and
resolved by commit 7170130e4c72 ("x86/mm/init: Handle the special case
of device private pages in add_pages(), to not increase max_pfn and
trigger dma_addressing_limited() bounce buffers"). It would have been
easier to debug that issue if dma_addressing_limited() had reported
that the device cannot address all of memory and is thus forcing all
accesses through a bounce buffer. Please see [2].
Add a dev_dbg() to report the potential use of bounce buffers when we
hit the condition. When swiotlb is used, dma_addressing_limited() is
consulted to determine the maximum DMA buffer size in
dma_direct_max_mapping_size(), so the debug print can also be
triggered from that check (when enabled).
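For illustration only (not part of this patch), here is a minimal
sketch of how a driver might consult dma_addressing_limited(); the
foo_setup_dma() helper is hypothetical. With this change applied, the
dev_dbg() in dma_addressing_limited() reports the condition and can be
enabled at runtime with dynamic debug, e.g.
echo 'func dma_addressing_limited +p' > /sys/kernel/debug/dynamic_debug/control

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hypothetical probe helper: set a 32-bit DMA mask and check whether
 * the device can reach all of RAM with it.
 */
static int foo_setup_dma(struct device *dev)
{
	int ret;

	/* Request 32-bit addressing; RAM may extend above 4 GiB. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/*
	 * If the mask (or bus_dma_limit) does not cover all of memory,
	 * streaming DMA may be bounced through swiotlb; the dev_dbg()
	 * added by this patch makes that visible without instrumenting
	 * every driver by hand.
	 */
	if (dma_addressing_limited(dev))
		dev_info(dev, "DMA addressing limited, bounce buffering likely\n");

	return 0;
}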
Link: https://lore.kernel.org/lkml/20250401000752.249348-1-balbirs@nvidia.com/ [1]
Link: https://lore.kernel.org/lkml/20250310112206.4168-1-spasswolf@web.de/ [2]
Cc: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: "Christian König" <christian.koenig@....com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Kees Cook <kees@...nel.org>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Alex Deucher <alexander.deucher@....com>
Cc: Bert Karwatzki <spasswolf@....de>
Cc: Christoph Hellwig <hch@...radead.org>
Signed-off-by: Balbir Singh <balbirs@...dia.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Signed-off-by: Marek Szyprowski <m.szyprowski@...sung.com>
Link: https://lore.kernel.org/r/20250414113752.3298276-1-balbirs@nvidia.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 kernel/dma/mapping.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 864a1121bf086..f7366083b4d00 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -905,7 +905,7 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
  * the system, else %false. Lack of addressing bits is the prime reason for
  * bounce buffering, but might not be the only one.
  */
-bool dma_addressing_limited(struct device *dev)
+static bool __dma_addressing_limited(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
@@ -917,6 +917,15 @@ bool dma_addressing_limited(struct device *dev)
 		return false;
 	return !dma_direct_all_ram_mapped(dev);
 }
+
+bool dma_addressing_limited(struct device *dev)
+{
+	if (!__dma_addressing_limited(dev))
+		return false;
+
+	dev_dbg(dev, "device is DMA addressing limited\n");
+	return true;
+}
 EXPORT_SYMBOL_GPL(dma_addressing_limited);
 
 size_t dma_max_mapping_size(struct device *dev)
--
2.39.5