Message-ID: <alpine.DEB.2.22.394.2505021007460.3879245@ubuntu-linux-20-04-desktop>
Date: Fri, 2 May 2025 10:20:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@...nel.org>
To: John Ernberg <john.ernberg@...ia.se>
cc: Juergen Gross <jgross@...e.com>,
Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Catalin Marinas <catalin.marinas@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"imx@...ts.linux.dev" <imx@...ts.linux.dev>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH 2/2] xen: swiotlb: Implement map_resource callback
+Christoph
On Fri, 2 May 2025, John Ernberg wrote:
> The eDMA v3 DMA engine found in iommu-less SoCs such as the iMX8QXP needs
> this to perform DMA operations as a Xen Hardware Domain, which requires
> being able to do DMA to MMIO space.
>
> The callback implementation is basically the same as the one for direct
> mapping of resources, except this also takes into account Xen page
> mappings.
>
> There is nothing to do for unmap, just like with direct, so the unmap
> callback is not needed.
>
> Signed-off-by: John Ernberg <john.ernberg@...ia.se>
>
> ---
>
> I originally exported dma_direct_map_resource() and used that, which
> appeared to work, but not checking the Xen page mappings didn't feel
> fully correct, so I went with this. If dma_direct_map_resource() is the
> correct approach here, I can submit that instead.
> ---
> drivers/xen/swiotlb-xen.c | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index ef56a2500ed6..0f02fdd9128d 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -285,6 +285,20 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>  			attrs, pool);
>  }
>
> +static dma_addr_t xen_swiotlb_map_resource(struct device *dev, phys_addr_t phys,
> +					    size_t size, enum dma_data_direction dir,
> +					    unsigned long attrs)
> +{
> +	dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
Yes, we need the xen_phys_to_dma call here. This is one of the reasons I
don't think we can use dma_direct_map_resource() to implement
map_resource.
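For comparison, dma_direct_map_resource() computes the DMA address with a
plain phys_to_dma(), while swiotlb-xen also translates the guest frame
through pfn_to_bfn() first. Roughly (an illustrative sketch of the
translation, not the exact helpers in swiotlb-xen.c):

	/*
	 * Illustrative only: the guest (pseudo-)physical frame is mapped
	 * to the backing machine/backend frame before forming the bus
	 * address. dma_direct would skip this translation entirely.
	 */
	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(phys));
	dma_addr_t dev_addr = phys_to_dma(dev,
			((phys_addr_t)bfn << XEN_PAGE_SHIFT) |
			(phys & ~XEN_PAGE_MASK));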
> +	BUG_ON(dir == DMA_NONE);
> +
> +	if (!dma_capable(dev, dev_addr, size, false))
> +		return DMA_MAPPING_ERROR;
But here you also need to check that phys+size doesn't cross a page
boundary. You need to call range_straddles_page_boundary, like we do at
the beginning of xen_swiotlb_map_page.
If it is possible to cross a page boundary, then we basically need to do
the same thing here as we do in xen_swiotlb_map_page, where we check:
    if (dma_capable(dev, dev_addr, size, true) &&
        !range_straddles_page_boundary(phys, size) &&
        !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
        !is_swiotlb_force_bounce(dev))
            goto done;
If all is well, we go with the native path; otherwise we bounce through
swiotlb-xen.
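For illustration, a minimal sketch of what the boundary check could look
like here, under the assumption that for an MMIO resource we would rather
fail the mapping than bounce (whether bouncing MMIO through swiotlb is
meaningful at all is an open question):

	static dma_addr_t xen_swiotlb_map_resource(struct device *dev,
			phys_addr_t phys, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);

		BUG_ON(dir == DMA_NONE);

		/*
		 * Sketch: reject ranges the device cannot reach or that
		 * straddle a Xen page boundary (contiguous guest pages
		 * are not guaranteed to be contiguous machine pages),
		 * instead of bouncing as xen_swiotlb_map_page() does.
		 */
		if (!dma_capable(dev, dev_addr, size, false) ||
		    range_straddles_page_boundary(phys, size))
			return DMA_MAPPING_ERROR;

		return dev_addr;
	}

If bouncing is required instead, the swiotlb_tbl_map_single() path from
xen_swiotlb_map_page would need to be reused here, which is where
Christoph's input would help.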
> +	return dev_addr;
> +}
> +
>  static void
>  xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>  		size_t size, enum dma_data_direction dir)
> @@ -426,4 +440,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
>  	.alloc_pages_op = dma_common_alloc_pages,
>  	.free_pages = dma_common_free_pages,
>  	.max_mapping_size = swiotlb_max_mapping_size,
> +	.map_resource = xen_swiotlb_map_resource,
>  };
> --
> 2.49.0
>