Message-ID: <CAGsJ_4yA83-K7PXiEtyidzF_j6qqKkt92z485KBS9+zGe_rjnw@mail.gmail.com>
Date: Sun, 28 Dec 2025 09:52:05 +1300
From: Barry Song <21cnbao@...il.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: catalin.marinas@....com, m.szyprowski@...sung.com, robin.murphy@....com,
will@...nel.org, iommu@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
Ada Couprie Diaz <ada.coupriediaz@....com>, Ard Biesheuvel <ardb@...nel.org>, Marc Zyngier <maz@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>, Ryan Roberts <ryan.roberts@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Tangquan Zheng <zhengtangquan@...o.com>
Subject: Re: [PATCH v2 5/8] dma-mapping: Support batch mode for dma_direct_sync_sg_for_*
On Sun, Dec 28, 2025 at 9:09 AM Leon Romanovsky <leon@...nel.org> wrote:
>
> On Sat, Dec 27, 2025 at 11:52:45AM +1300, Barry Song wrote:
> > From: Barry Song <baohua@...nel.org>
> >
> > Instead of performing a flush per SG entry, issue all cache
> > operations first and then flush once. This ultimately benefits
> > __dma_sync_sg_for_cpu() and __dma_sync_sg_for_device().
> >
> > Cc: Leon Romanovsky <leon@...nel.org>
> > Cc: Catalin Marinas <catalin.marinas@....com>
> > Cc: Will Deacon <will@...nel.org>
> > Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> > Cc: Robin Murphy <robin.murphy@....com>
> > Cc: Ada Couprie Diaz <ada.coupriediaz@....com>
> > Cc: Ard Biesheuvel <ardb@...nel.org>
> > Cc: Marc Zyngier <maz@...nel.org>
> > Cc: Anshuman Khandual <anshuman.khandual@....com>
> > Cc: Ryan Roberts <ryan.roberts@....com>
> > Cc: Suren Baghdasaryan <surenb@...gle.com>
> > Cc: Tangquan Zheng <zhengtangquan@...o.com>
> > Signed-off-by: Barry Song <baohua@...nel.org>
> > ---
> > kernel/dma/direct.c | 14 +++++++-------
> > 1 file changed, 7 insertions(+), 7 deletions(-)
>
> <...>
>
> > -		if (!dev_is_dma_coherent(dev)) {
> > +		if (!dev_is_dma_coherent(dev))
> >  			arch_sync_dma_for_device(paddr, sg->length,
> >  						 dir);
> > -			arch_sync_dma_flush();
> > -		}
> >  	}
> > +	if (!dev_is_dma_coherent(dev))
> > +		arch_sync_dma_flush();
>
> This patch should be squashed into the previous one. You introduced
> arch_sync_dma_flush() there, and now you are placing it elsewhere.
Hi Leon,
The previous patch replaces each arch_sync_dma_for_* call with
arch_sync_dma_for_* plus arch_sync_dma_flush(), with no functional
change. The subsequent patches then implement the actual batching.
I feel this makes each change easier to review independently;
otherwise the previous patch would be too large.
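
To make the end state concrete, here is a rough sketch of
dma_direct_sync_sg_for_device() after this patch (simplified; the
code surrounding the quoted hunk is paraphrased from
kernel/dma/direct.c and may not match the tree exactly):

void dma_direct_sync_sg_for_device(struct device *dev,
		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

		if (unlikely(is_swiotlb_buffer(dev, paddr)))
			swiotlb_sync_single_for_device(dev, paddr,
						       sg->length, dir);

		/* queue cache maintenance for this entry, no flush yet */
		if (!dev_is_dma_coherent(dev))
			arch_sync_dma_for_device(paddr, sg->length, dir);
	}

	/* one flush for the whole scatterlist instead of one per entry */
	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_flush();
}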
Thanks
Barry