Message-ID: <aFFq19W0D5JeOyeI@arm.com>
Date: Tue, 17 Jun 2025 14:17:11 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Christian Meissl <meissl.christian@...il.com>
Cc: linux-arm-kernel@...ts.infradead.org,
	Russell King <linux@...linux.org.uk>,
	Christoph Hellwig <hch@....de>,
	Philipp Zabel <p.zabel@...gutronix.de>,
	linux-kernel@...r.kernel.org, linux-media@...r.kernel.org
Subject: Re: [PATCH] ARM/dma-mapping: invalidate caches on
 arch_dma_prep_coherent

On Tue, Jun 17, 2025 at 09:54:46AM +0200, Christian Meissl wrote:
> Since switching to dma-direct, memory allocated with DMA_ATTR_NO_KERNEL_MAPPING
> is no longer obtained through the arch-specific handlers; it now goes
> through dma_direct_alloc_no_mapping(). The arm-specific allocation
> handlers implicitly clear the allocated DMA buffers and flush the caches,
> whereas dma-direct relies on ARCH_HAS_DMA_PREP_COHERENT to flush the caches.
> 
> Without this flush, video frame corruption can occur in drivers such as
> the coda v4l2 driver, which explicitly sets DMA_ATTR_NO_KERNEL_MAPPING.
> 
> Fixes: ae626eb97376 ("ARM/dma-mapping: use dma-direct unconditionally")
> Signed-off-by: Christian Meissl <meissl.christian@...il.com>
[...]
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 88c2d68a69c9..bde7ae4ba31a 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -1821,3 +1821,11 @@ void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
>  {
>         __arm_dma_free(dev, size, cpu_addr, dma_handle, attrs, false);
>  }
> +
> +void arch_dma_prep_coherent(struct page *page, size_t size)
> +{
> +       void *ptr = page_address(page);
> +
> +       dmac_flush_range(ptr, ptr + size);
> +       outer_flush_range(__pa(ptr), __pa(ptr) + size);
> +}

It probably doesn't make any difference in practice but, FWIW, arm64 only
does a clean rather than a flush (clean+invalidate) here.
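For comparison, the arm64 side looks roughly like the below (quoting from
memory, so treat it as a sketch rather than the exact current tree;
dcache_clean_poc() cleans to the Point of Coherency without invalidating):

	void arch_dma_prep_coherent(struct page *page, size_t size)
	{
		unsigned long start = (unsigned long)page_address(page);

		/* clean only, no invalidate */
		dcache_clean_poc(start, start + size);
	}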

What I noticed is that arch_dma_prep_coherent() is only called for
lowmem pages, so doing page_address() is safe. However, I don't think we
have anything to flush the caches for highmem pages.
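If highmem ever needed covering here, one possible shape (an untested,
hypothetical sketch, not a proposed patch) would be to walk the buffer page
by page and flush through a temporary kmap_atomic() alias, keeping the
outer-cache maintenance on the physical range:

	/* Hypothetical, untested sketch for a highmem-capable variant */
	void arch_dma_prep_coherent(struct page *page, size_t size)
	{
		phys_addr_t paddr = page_to_phys(page);

		while (size) {
			size_t len = min_t(size_t, size, PAGE_SIZE);
			void *ptr = kmap_atomic(page);

			dmac_flush_range(ptr, ptr + len);
			kunmap_atomic(ptr);
			outer_flush_range(paddr, paddr + len);

			page++;
			paddr += len;
			size -= len;
		}
	}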

-- 
Catalin
