Message-ID: <CAGsJ_4xYqseJMFXOU39JJW4Lk2ZHXAnRJLhZdVuFLxAi=Dy5sw@mail.gmail.com>
Date: Wed, 7 Jan 2026 08:47:36 +1300
From: Barry Song <21cnbao@...il.com>
To: Robin Murphy <robin.murphy@....com>
Cc: Leon Romanovsky <leon@...nel.org>, catalin.marinas@....com, m.szyprowski@...sung.com, 
	will@...nel.org, iommu@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org, 
	linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org, 
	Ada Couprie Diaz <ada.coupriediaz@....com>, Ard Biesheuvel <ardb@...nel.org>, Marc Zyngier <maz@...nel.org>, 
	Anshuman Khandual <anshuman.khandual@....com>, Ryan Roberts <ryan.roberts@....com>, 
	Suren Baghdasaryan <surenb@...gle.com>, Tangquan Zheng <zhengtangquan@...o.com>
Subject: Re: [PATCH v2 5/8] dma-mapping: Support batch mode for dma_direct_sync_sg_for_*

On Wed, Jan 7, 2026 at 8:12 AM Robin Murphy <robin.murphy@....com> wrote:
>
> On 2026-01-06 6:41 pm, Barry Song wrote:
> > On Mon, Dec 29, 2025 at 3:50 AM Leon Romanovsky <leon@...nel.org> wrote:
> >>
> >> On Sun, Dec 28, 2025 at 09:52:05AM +1300, Barry Song wrote:
> >>> On Sun, Dec 28, 2025 at 9:09 AM Leon Romanovsky <leon@...nel.org> wrote:
> >>>>
> >>>> On Sat, Dec 27, 2025 at 11:52:45AM +1300, Barry Song wrote:
> >>>>> From: Barry Song <baohua@...nel.org>
> >>>>>
> >>>>> Instead of performing a flush per SG entry, issue all cache
> >>>>> operations first and then flush once. This ultimately benefits
> >>>>> __dma_sync_sg_for_cpu() and __dma_sync_sg_for_device().
> >>>>>
> >>>>> Cc: Leon Romanovsky <leon@...nel.org>
> >>>>> Cc: Catalin Marinas <catalin.marinas@....com>
> >>>>> Cc: Will Deacon <will@...nel.org>
> >>>>> Cc: Marek Szyprowski <m.szyprowski@...sung.com>
> >>>>> Cc: Robin Murphy <robin.murphy@....com>
> >>>>> Cc: Ada Couprie Diaz <ada.coupriediaz@....com>
> >>>>> Cc: Ard Biesheuvel <ardb@...nel.org>
> >>>>> Cc: Marc Zyngier <maz@...nel.org>
> >>>>> Cc: Anshuman Khandual <anshuman.khandual@....com>
> >>>>> Cc: Ryan Roberts <ryan.roberts@....com>
> >>>>> Cc: Suren Baghdasaryan <surenb@...gle.com>
> >>>>> Cc: Tangquan Zheng <zhengtangquan@...o.com>
> >>>>> Signed-off-by: Barry Song <baohua@...nel.org>
> >>>>> ---
> >>>>>   kernel/dma/direct.c | 14 +++++++-------
> >>>>>   1 file changed, 7 insertions(+), 7 deletions(-)
> >>>>
> >>>> <...>
> >>>>
> >>>>> -             if (!dev_is_dma_coherent(dev)) {
> >>>>> +             if (!dev_is_dma_coherent(dev))
> >>>>>                        arch_sync_dma_for_device(paddr, sg->length,
> >>>>>                                        dir);
> >>>>> -                     arch_sync_dma_flush();
> >>>>> -             }
> >>>>>        }
> >>>>> +     if (!dev_is_dma_coherent(dev))
> >>>>> +             arch_sync_dma_flush();
> >>>>
> >>>> This patch should be squashed into the previous one. You introduced
> >>>> arch_sync_dma_flush() there, and now you are placing it elsewhere.
> >>>
> >>> Hi Leon,
> >>>
> >>> The previous patch replaces all arch_sync_dma_for_* calls with
> >>> arch_sync_dma_for_* plus arch_sync_dma_flush(), without any
> >>> functional change. The subsequent patches then implement the
> >>> actual batching. I feel this is a better approach for reviewing
> >>> each change independently. Otherwise, the previous patch would
> >>> be too large.
> >>
> >> Don't worry about it. Your patches are small enough.
> >
> > My hardware does not require a bounce buffer, but I am concerned that
> > this patch may be incorrect for systems that do require one.
> >
> > Now it is:
> >
> > void dma_direct_sync_sg_for_cpu(struct device *dev,
> >                  struct scatterlist *sgl, int nents, enum dma_data_direction dir)
> > {
> >          struct scatterlist *sg;
> >          int i;
> >
> >          for_each_sg(sgl, sg, nents, i) {
> >                  phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
> >
> >                  if (!dev_is_dma_coherent(dev))
> >                          arch_sync_dma_for_cpu(paddr, sg->length, dir);
> >
> >                  swiotlb_sync_single_for_cpu(dev, paddr, sg->length, dir);
> >
> >                  if (dir == DMA_FROM_DEVICE)
> >                          arch_dma_mark_clean(paddr, sg->length);
> >          }
> >
> >          if (!dev_is_dma_coherent(dev)) {
> >                  arch_sync_dma_flush();
> >                  arch_sync_dma_for_cpu_all();
> >          }
> > }
> >
> > Should we call swiotlb_sync_single_for_cpu() and
> > arch_dma_mark_clean() after the flush to ensure the CPU sees the
> > latest data and that the memcpy is correct? I mean:
>
> Yes, this and the equivalents in the later patches are broken for all
> the sync_for_cpu and unmap paths which may end up bouncing (beware some
> of them get a bit fiddly) - any cache maintenance *must* be completed
> before calling SWIOTLB. As for mark_clean, IIRC that was an IA-64 thing,
> and appears to be entirely dead now.

Thanks, Robin. Personally, I would prefer an approach like the one below:
that is, not optimizing the bounce-buffer cases, since they are already
slow due to the memcpy imposed by hardware limitations, and optimizing
them would make the code quite messy.

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 550a1a13148d..a4840f7e8722 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -423,8 +423,11 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
        for_each_sg(sgl, sg, nents, i) {
                phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

-               if (!dev_is_dma_coherent(dev))
+               if (!dev_is_dma_coherent(dev)) {
                        arch_sync_dma_for_cpu(paddr, sg->length, dir);
+                       if (unlikely(dev->dma_io_tlb_mem))
+                               arch_sync_dma_flush();
+               }

                swiotlb_sync_single_for_cpu(dev, paddr, sg->length, dir);

I’d like to check with you, Leon, and Marek for your views on this.

Thanks
Barry
