Message-ID: <20251221115523.GI13030@unreal>
Date: Sun, 21 Dec 2025 13:55:23 +0200
From: Leon Romanovsky <leon@...nel.org>
To: Barry Song <21cnbao@...il.com>
Cc: catalin.marinas@....com, m.szyprowski@...sung.com, robin.murphy@....com,
will@...nel.org, ada.coupriediaz@....com, anshuman.khandual@....com,
ardb@...nel.org, iommu@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
maz@...nel.org, ryan.roberts@....com, surenb@...gle.com,
v-songbaohua@...o.com, zhengtangquan@...o.com
Subject: Re: [PATCH 5/6] dma-mapping: Allow batched DMA sync operations if
supported by the arch

On Fri, Dec 19, 2025 at 01:36:57PM +0800, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> This enables dma_direct_sync_sg_for_device, dma_direct_sync_sg_for_cpu,
> dma_direct_map_sg, and dma_direct_unmap_sg to use batched DMA sync
> operations when possible. This significantly improves performance on
> devices without hardware cache coherence.
>
> Tangquan's initial results show that batched synchronization can reduce
> dma_map_sg() time by 64.61% and dma_unmap_sg() time by 66.60% on an MTK
> phone platform (MediaTek Dimensity 9500). The tests were performed by
> pinning the task to CPU7 and fixing the CPU frequency at 2.6 GHz,
> running dma_map_sg() and dma_unmap_sg() on 10 MB buffers (10 MB / 4 KB
> = 2560 sg entries per buffer) for 200 iterations and then averaging the
> results.
>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> Cc: Robin Murphy <robin.murphy@....com>
> Cc: Ada Couprie Diaz <ada.coupriediaz@....com>
> Cc: Ard Biesheuvel <ardb@...nel.org>
> Cc: Marc Zyngier <maz@...nel.org>
> Cc: Anshuman Khandual <anshuman.khandual@....com>
> Cc: Ryan Roberts <ryan.roberts@....com>
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Tangquan Zheng <zhengtangquan@...o.com>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> ---
> kernel/dma/direct.c | 28 ++++++++++-----
> kernel/dma/direct.h | 86 +++++++++++++++++++++++++++++++++++++++------
> 2 files changed, 95 insertions(+), 19 deletions(-)
<...>
> if (!dev_is_dma_coherent(dev))
> - arch_sync_dma_for_device(paddr, sg->length,
> - dir);
> + arch_sync_dma_for_device_batch_add(paddr, sg->length, dir);
<...>
> -static inline dma_addr_t dma_direct_map_phys(struct device *dev,
> +#ifdef CONFIG_ARCH_WANT_BATCHED_DMA_SYNC
> +static inline void dma_direct_sync_single_for_cpu_batch_add(struct device *dev,
> + dma_addr_t addr, size_t size, enum dma_data_direction dir)
> +{
> + phys_addr_t paddr = dma_to_phys(dev, addr);
> +
> + if (!dev_is_dma_coherent(dev))
> + arch_sync_dma_for_cpu_batch_add(paddr, size, dir);
> +
> + __dma_direct_sync_single_for_cpu(dev, paddr, size, dir);
> +}
> +#endif
> +
> +static inline void dma_direct_sync_single_for_cpu(struct device *dev,
> + dma_addr_t addr, size_t size, enum dma_data_direction dir)
> +{
> + phys_addr_t paddr = dma_to_phys(dev, addr);
> +
> + if (!dev_is_dma_coherent(dev))
> + arch_sync_dma_for_cpu(paddr, size, dir);
> +
> + __dma_direct_sync_single_for_cpu(dev, paddr, size, dir);
> +}
> +
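
If I follow the patch, the _batch_add variants are identical to the
existing helpers except that they defer to the arch batch hook instead
of syncing each range immediately, and the sg loops then flush the
accumulated ranges once at the end. Roughly this usage pattern (the
begin/flush names below are my guesses for illustration, not names
taken from this series):

	arch_sync_dma_batch_begin();		/* hypothetical */
	for_each_sg(sgl, sg, nents, i)
		arch_sync_dma_for_device_batch_add(sg_phys(sg),
						   sg->length, dir);
	arch_sync_dma_batch_flush();	/* one flush for all ranges */
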
I'm wondering why you don't implement this batch-sync support inside the
arch_sync_dma_*() functions. Doing so would minimize changes to the generic
kernel/dma/* code and reduce the amount of #ifdef-based spaghetti.
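
Something like the untested sketch below, where the per-CPU batch state
and the dma_sync_batch_*() helpers are invented for illustration:

	/* arch/arm64/mm/dma-mapping.c */
	void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
				      enum dma_data_direction dir)
	{
		unsigned long start = (unsigned long)phys_to_virt(paddr);

		if (dma_sync_batch_active()) {
			/* record the range, clean once when the batch ends */
			dma_sync_batch_add(start, size);
			return;
		}

		dcache_clean_poc(start, start + size);
	}

With that, kernel/dma/direct.c keeps calling the existing hooks and the
batching remains an arch-internal detail.
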
Thanks."