Message-ID: <CAGsJ_4yuvHNqHDi8eN-8UoY2McoXUeCMmbjFAr=jSdv8GpGKeg@mail.gmail.com>
Date: Tue, 23 Dec 2025 13:02:55 +1300
From: Barry Song <21cnbao@...il.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: ada.coupriediaz@....com, anshuman.khandual@....com, ardb@...nel.org,
catalin.marinas@....com, iommu@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
m.szyprowski@...sung.com, maz@...nel.org, robin.murphy@....com,
ryan.roberts@....com, surenb@...gle.com, v-songbaohua@...o.com,
will@...nel.org, zhengtangquan@...o.com
Subject: Re: [PATCH 5/6] dma-mapping: Allow batched DMA sync operations if
supported by the arch
On Mon, Dec 22, 2025 at 9:49 PM Leon Romanovsky <leon@...nel.org> wrote:
>
> On Mon, Dec 22, 2025 at 03:24:58AM +0800, Barry Song wrote:
> > On Sun, Dec 21, 2025 at 7:55 PM Leon Romanovsky <leon@...nel.org> wrote:
> > [...]
> > > > +
> > >
> > > I'm wondering why you don't implement this batch‑sync support inside the
> > > arch_sync_dma_*() functions. Doing so would minimize changes to the generic
> > > kernel/dma/* code and reduce the amount of #ifdef‑based spaghetti.
> > >
> >
> > There are two cases: mapping an sg list and mapping a single
> > buffer. The former can be batched with
> > arch_sync_dma_*_batch_add() and flushed via
> > arch_sync_dma_batch_flush(), while the latter requires all work to
> > be done inside arch_sync_dma_*(). Therefore,
> > arch_sync_dma_*() cannot always batch and flush.
>
> Probably in all cases you can call the _batch_ variant, followed by _flush_,
> even when handling a single page. This keeps the code consistent across all
> paths. On platforms that do not support _batch_, the _flush_ operation will be
> a NOP anyway.
We have a lot of code outside kernel/dma that also calls
arch_sync_dma_for_*, such as arch/arm, arch/mips and drivers/xen.
I guess we don't want to modify that many callers?

Within kernel/dma, we have only two "single" callers:
kernel/dma/direct.h and kernel/dma/swiotlb.c, and they look quite
straightforward:
static inline void dma_direct_sync_single_for_device(struct device *dev,
		dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
	phys_addr_t paddr = dma_to_phys(dev, addr);

	swiotlb_sync_single_for_device(dev, paddr, size, dir);

	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_for_device(paddr, size, dir);
}
I guess moving to arch_sync_dma_for_device_batch_add() +
arch_sync_dma_batch_flush() doesn't really look much better, does it?
>
> I would also rename arch_sync_dma_batch_flush() to arch_sync_dma_flush().
Sure.
>
> You can also minimize changes in dma_direct_map_phys() too, by extending
> its signature to provide if flush is needed or not.
Yes. I have:

static inline dma_addr_t __dma_direct_map_phys(struct device *dev,
		phys_addr_t phys, size_t size, enum dma_data_direction dir,
		unsigned long attrs, bool flush)
and two wrappers:
static inline dma_addr_t dma_direct_map_phys(struct device *dev,
		phys_addr_t phys, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	return __dma_direct_map_phys(dev, phys, size, dir, attrs, true);
}

static inline dma_addr_t dma_direct_map_phys_batch_add(struct device *dev,
		phys_addr_t phys, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	return __dma_direct_map_phys(dev, phys, size, dir, attrs, false);
}
If you prefer exposing "flush" directly in dma_direct_map_phys()
and updating its callers to pass flush=true, I think that's fine.
The same would apply to dma_direct_sync_single_for_device().
>
> dma_direct_map_phys(....) -> dma_direct_map_phys(...., bool flush):
>
> static inline dma_addr_t dma_direct_map_phys(...., bool flush)
> {
> 	....
>
> 	if (dma_addr != DMA_MAPPING_ERROR && !dev_is_dma_coherent(dev) &&
> 	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
> 		arch_sync_dma_for_device(phys, size, dir);
> 		if (flush)
> 			arch_sync_dma_flush();
> 	}
> }
>
Thanks
Barry