Date:   Mon, 23 May 2022 12:38:28 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Kefeng Wang <wangkefeng.wang@...wei.com>
Cc:     elver@...gle.com, catalin.marinas@....com, will@...nel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
        arnd@...db.de
Subject: Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()

On Mon, May 23, 2022 at 07:31:25PM +0800, Kefeng Wang wrote:
> The memory barrier dma_mb() was introduced by commit a76a37777f2c
> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> and is used to ensure that prior accesses (both reads and writes) to
> memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> 
> Reviewed-by: Arnd Bergmann <arnd@...db.de> # for asm-generic
> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>

FWIW, this looks sane to me so:

  Acked-by: Mark Rutland <mark.rutland@....com>

I'll leave the final say to Will, as I assume this'll go via the arm64 tree and
he'll be the one picking this up.
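
To make the intended usage concrete, the sort of pattern I'd expect this
to enable is roughly the following (names are purely illustrative, not
taken from the patch or from the SMMUv3 driver): a consumer reads
device-written entries out of coherent memory and then publishes its new
cons pointer with a relaxed MMIO write, relying on dma_mb() to keep the
prior reads (and writes) ordered before that MMIO write so the device
can't recycle entries the CPU is still reading:

	while (cons != prod) {
		handle_entry(&q->ent[cons & q->mask]);	/* reads coherent memory */
		cons++;
	}

	dma_mb();	/* prior reads/writes complete before the MMIO write below */

	writel_relaxed(cons, q->base + Q_CONS_REG);

A plain dma_wmb() wouldn't be enough there, since it only orders the
writes.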

Mark.

> ---
>  Documentation/memory-barriers.txt | 11 ++++++-----
>  include/asm-generic/barrier.h     |  8 ++++++++
>  2 files changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index b12df9137e1c..832b5d36e279 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
>  
>   (*) dma_wmb();
>   (*) dma_rmb();
> + (*) dma_mb();
>  
>       These are for use with consistent memory to guarantee the ordering
>       of writes or reads of shared memory accessible to both the CPU and a
> @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
>       The dma_rmb() allows us guarantee the device has released ownership
>       before we read the data from the descriptor, and the dma_wmb() allows
>       us to guarantee the data is written to the descriptor before the device
> -     can see it now has ownership.  Note that, when using writel(), a prior
> -     wmb() is not needed to guarantee that the cache coherent memory writes
> -     have completed before writing to the MMIO region.  The cheaper
> -     writel_relaxed() does not provide this guarantee and must not be used
> -     here.
> +     can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
> +     a dma_wmb().  Note that, when using writel(), a prior wmb() is not needed
> +     to guarantee that the cache coherent memory writes have completed before
> +     writing to the MMIO region.  The cheaper writel_relaxed() does not provide
> +     this guarantee and must not be used here.
>  
>       See the subsection "Kernel I/O barrier effects" for more information on
>       relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index fd7e8fbaeef1..961f4d88f9ef 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -38,6 +38,10 @@
>  #define wmb()	do { kcsan_wmb(); __wmb(); } while (0)
>  #endif
>  
> +#ifdef __dma_mb
> +#define dma_mb()	do { kcsan_mb(); __dma_mb(); } while (0)
> +#endif
> +
>  #ifdef __dma_rmb
>  #define dma_rmb()	do { kcsan_rmb(); __dma_rmb(); } while (0)
>  #endif
> @@ -65,6 +69,10 @@
>  #define wmb()	mb()
>  #endif
>  
> +#ifndef dma_mb
> +#define dma_mb()	mb()
> +#endif
> +
>  #ifndef dma_rmb
>  #define dma_rmb()	rmb()
>  #endif
> -- 
> 2.35.3
> 
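
To spell out the descriptor ownership pattern the amended paragraph is
describing (with made-up field and register names, since the
documentation's own example isn't part of this hunk), the producer side
looks roughly like:

	/* Fill in the descriptor, then hand it to the device. */
	desc->addr = cpu_to_le64(buf_dma);
	desc->len  = cpu_to_le32(len);

	dma_wmb();			/* data visible before the ownership flag */
	desc->flags = cpu_to_le32(DESC_HW_OWNED);

	writel(tail, dev->base + TX_DOORBELL);	/* no extra wmb() needed */

As the text says, writel() already orders the coherent-memory writes
before the MMIO write, whereas writel_relaxed() would not.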

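On the asm-generic side, the opt-in mechanism mirrors the existing
__dma_rmb()/__dma_wmb() hooks: an architecture defines __dma_mb() before
pulling in the generic header and gets the KCSAN-instrumented wrapper,
while everyone else silently falls back to mb(). A rough sketch of what
an arch header would do (the real arm64 definition is in patch 2/2; the
dmb(osh) below is only my assumption of what that looks like):

	/* arch/<arch>/include/asm/barrier.h (illustrative) */
	#define __dma_mb()	dmb(osh)	/* full outer-shareable barrier */

	#include <asm-generic/barrier.h>
	/* dma_mb() now expands to: do { kcsan_mb(); __dma_mb(); } while (0) */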