Message-ID: <601437ae-2860-c48a-aa7c-4da37aeb6256@arm.com>
Date:   Mon, 18 Sep 2017 10:44:54 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Huacai Chen <chenhc@...ote.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Fuxin Zhang <zhangfx@...ote.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [V5, 2/3] mm: dmapool: Align to ARCH_DMA_MINALIGN in non-coherent
 DMA mode

On 18/09/17 05:22, Huacai Chen wrote:
> In non-coherent DMA mode, the kernel uses cache flushing operations to
> maintain I/O coherency, so dmapool objects should be aligned to
> ARCH_DMA_MINALIGN. Otherwise, data corruption can result, at least on
> MIPS:
> 
> 	Step 1, dma_map_single
> 	Step 2, cache_invalidate (no writeback)
> 	Step 3, dma_from_device
> 	Step 4, dma_unmap_single
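
Concretely, that sequence corresponds to a streaming-DMA receive path
along the following lines: a minimal sketch, where the structure
layout, names and sizes are hypothetical rather than taken from the
patch (error checking omitted for brevity):

    #include <linux/dma-mapping.h>
    #include <linux/spinlock.h>

    /* Hypothetical layout: a DMA buffer embedded next to CPU-owned
     * state, so both can fall within one cache line when rx_buf is
     * not ARCH_DMA_MINALIGN-aligned. */
    struct foo_priv {
            spinlock_t lock;   /* CPU-written, may be dirty */
            u8 rx_buf[32];     /* written by the device     */
    };

    static void foo_rx(struct device *dev, struct foo_priv *p, size_t len)
    {
            dma_addr_t addr;

            /* Steps 1-2: map for DMA; on a non-coherent arch this
             * invalidates (without writeback) the cache lines covering
             * rx_buf, discarding any dirty CPU data (e.g. p->lock)
             * that shares a line with it. */
            addr = dma_map_single(dev, p->rx_buf, len, DMA_FROM_DEVICE);

            /* Step 3: the device DMAs into the buffer. */

            /* Step 4: unmap, invalidating again before the CPU reads. */
            dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
    }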

This is a massive red warning flag for the whole series, because DMA
pools don't work like that. At best this patch will do nothing, and at
worst it is papering over egregious bugs elsewhere: taking streaming
mappings of coherent allocations is completely broken code to begin with.
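
To spell that out: a dma_pool block is handed back together with its
device address and stays coherent for as long as it lives, so feeding
it back through the streaming API is exactly the invalid pattern. A
minimal sketch, assuming 'pool' and 'dev' were set up elsewhere (the
function name is made up):

    #include <linux/dma-mapping.h>
    #include <linux/dmapool.h>

    static int foo_use_block(struct device *dev, struct dma_pool *pool)
    {
            dma_addr_t dma;
            void *vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma);

            if (!vaddr)
                    return -ENOMEM;

            /* Correct: hand 'dma' straight to the device. The memory
             * is coherent, so no dma_map_*()/dma_sync_*() calls are
             * needed (or permitted). */

            /* Broken - the pattern the commit message implies:
             *      dma_map_single(dev, vaddr, size, DMA_FROM_DEVICE);
             * a streaming mapping of a coherent allocation mixes the
             * two DMA APIs and is invalid regardless of alignment. */

            dma_pool_free(pool, vaddr, dma);
            return 0;
    }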

> If a DMA buffer and a kernel structure share the same cache line, and
> the kernel structure holds dirty data, cache_invalidate (no writeback)
> will cause data loss.

DMA pools are backed by coherent allocations, and those should already
be at *page* granularity, so for correct code this cannot happen on
either count.
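
For reference, mm/dmapool.c grabs its backing storage from
dma_alloc_coherent() a page (or more) at a time, so pool blocks live in
pages the kernel uses for nothing else, and no unrelated kernel
structure can share their cache lines. A minimal creation sketch (the
name and sizes are made up):

    #include <linux/dmapool.h>

    static struct dma_pool *foo_create_pool(struct device *dev)
    {
            /* Backing chunks come from dma_alloc_coherent(), at least
             * PAGE_SIZE each; blocks are carved out of those pages at
             * the requested alignment. */
            return dma_pool_create("foo-desc", dev,
                                   64,  /* block size              */
                                   64,  /* alignment, power of two */
                                   0);  /* no boundary restriction */
    }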

More generally, the whole point of having the DMA APIs is that drivers
and subsystems should not have to be aware of details like hardware
coherency. Besides, cache-line sharing that would be a correctness
issue on non-hardware-coherent systems can still be a performance issue
on hardware-coherent ones (due to unnecessary line migration), so
there's an argument for not treating the two cases differently anyway.

Robin.

> Cc: stable@...r.kernel.org
> Signed-off-by: Huacai Chen <chenhc@...ote.com>
> ---
>  mm/dmapool.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index 4d90a64..6263905 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -140,6 +140,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
>  	else if (align & (align - 1))
>  		return NULL;
>  
> +	if (!device_is_coherent(dev))
> +		align = max_t(size_t, align, dma_get_cache_alignment());
> +
>  	if (size == 0)
>  		return NULL;
>  	else if (size < 4)
> 
