Message-ID: <3927f2b4-a9ec-4717-86f6-6d5ac4e89804@samsung.com>
Date: Tue, 15 Apr 2025 10:12:56 +0200
From: Marek Szyprowski <m.szyprowski@...sung.com>
To: Balbir Singh <balbirs@...dia.com>, iommu@...ts.linux.dev
Cc: linux-kernel@...r.kernel.org, Robin Murphy <robin.murphy@....com>,
Christian König <christian.koenig@....com>, Ingo Molnar
<mingo@...nel.org>, Kees Cook <kees@...nel.org>, Bjorn Helgaas
<bhelgaas@...gle.com>, Linus Torvalds <torvalds@...ux-foundation.org>, Peter
Zijlstra <peterz@...radead.org>, Andy Lutomirski <luto@...nel.org>, Alex
Deucher <alexander.deucher@....com>, Bert Karwatzki <spasswolf@....de>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [v2] dma/mapping.c: dev_dbg support for dma_addressing_limited
On 14.04.2025 13:37, Balbir Singh wrote:
> The debugging and resolution of an issue involving the forced use of bounce
> buffers, which led to commit 7170130e4c72 ("x86/mm/init: Handle the special
> case of device private pages in add_pages(), to not increase max_pfn and
> trigger dma_addressing_limited() bounce buffers"), would have been easier
> if dma_addressing_limited() had reported that the device cannot address
> all of memory and is thus forcing all accesses through a bounce buffer.
> Please see [2].
>
> Add a dev_dbg() message to flag the potential use of bounce buffers
> when the condition is hit. When swiotlb is used,
> dma_addressing_limited() also determines the maximum DMA mapping size
> in dma_direct_max_mapping_size(), so the debug print can be triggered
> from that path as well (when enabled).
>
> Link: https://lore.kernel.org/lkml/20250401000752.249348-1-balbirs@nvidia.com/ [1]
> Link: https://lore.kernel.org/lkml/20250310112206.4168-1-spasswolf@web.de/ [2]
>
> Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> Cc: Robin Murphy <robin.murphy@....com>
> Cc: "Christian König" <christian.koenig@....com>
> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: Kees Cook <kees@...nel.org>
> Cc: Bjorn Helgaas <bhelgaas@...gle.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Alex Deucher <alexander.deucher@....com>
> Cc: Bert Karwatzki <spasswolf@....de>
> Cc: Christoph Hellwig <hch@...radead.org>
>
> Signed-off-by: Balbir Singh <balbirs@...dia.com>
Thanks, applied to dma-mapping-fixes branch.
> ---
> Changelog v2:
> - Change the debug message to be factual
> - Convert WARN_ONCE to dev_dbg
>
> Testing:
> - Limited testing on a VM, could not trigger the debug message on the machine
>
>
>
> kernel/dma/mapping.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index cda127027e48..67da08fa6723 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -918,7 +918,7 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
> * the system, else %false. Lack of addressing bits is the prime reason for
> * bounce buffering, but might not be the only one.
> */
> -bool dma_addressing_limited(struct device *dev)
> +static bool __dma_addressing_limited(struct device *dev)
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
>
> @@ -930,6 +930,15 @@ bool dma_addressing_limited(struct device *dev)
> return false;
> return !dma_direct_all_ram_mapped(dev);
> }
> +
> +bool dma_addressing_limited(struct device *dev)
> +{
> + if (!__dma_addressing_limited(dev))
> + return false;
> +
> + dev_dbg(dev, "device is DMA addressing limited\n");
> + return true;
> +}
> EXPORT_SYMBOL_GPL(dma_addressing_limited);
>
> size_t dma_max_mapping_size(struct device *dev)
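
As a side note, since the patch uses dev_dbg(), the new message is silent
unless debug output is enabled for this file. A sketch of how one might
enable it at runtime, assuming a kernel built with CONFIG_DYNAMIC_DEBUG=y
and debugfs mounted at /sys/kernel/debug (requires root):

```shell
# Enable the dev_dbg() print in dma_addressing_limited() at runtime.
echo 'func dma_addressing_limited +p' > /sys/kernel/debug/dynamic_debug/control

# The message should then show up in the kernel log when a DMA-limited
# device is probed or mapped:
dmesg | grep 'DMA addressing limited'
```

Without CONFIG_DYNAMIC_DEBUG, the same print can be enabled at boot with
the dyndbg= kernel parameter, or by building the file with DEBUG defined.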
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland