Message-ID: <Pine.LNX.4.44L0.2002271138080.1730-100000@iolanthe.rowland.org>
Date:   Thu, 27 Feb 2020 11:38:23 -0500 (EST)
From:   Alan Stern <stern@...land.harvard.edu>
To:     Boqun Feng <boqun.feng@...il.com>
cc:     linux-kernel@...r.kernel.org,
        Andrea Parri <parri.andrea@...il.com>,
        Will Deacon <will@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Nicholas Piggin <npiggin@...il.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Akira Yokosawa <akiyks@...il.com>,
        Daniel Lustig <dlustig@...dia.com>,
        Jonathan Corbet <corbet@....net>,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Rob Herring <robh@...nel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        <linux-arch@...r.kernel.org>, <linux-doc@...r.kernel.org>
Subject: Re: [PATCH v3 5/5] Documentation/locking/atomic: Add a litmus test
 smp_mb__after_atomic()

On Thu, 27 Feb 2020, Boqun Feng wrote:

> We already use a litmus test in atomic_t.txt to describe that an atomic
> RMW + smp_mb__after_atomic() is stronger than acquire (both the read
> and the write parts are ordered). So make it a litmus test in the
> atomic-tests directory, so that people can access it easily.
> 
> Additionally, change the processor numbers "P1, P2" to "P0, P1" in
> atomic_t.txt for consistency with the processor numbers in the litmus
> test, which herd can handle.
> 
> Signed-off-by: Boqun Feng <boqun.feng@...il.com>
> ---

Acked-by: Alan Stern <stern@...land.harvard.edu>
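
For reference, once the patch is applied the new test can be run with
herd7 as described in tools/memory-model/README.  A typical invocation
(run from tools/memory-model/, assuming herd7 is installed) would be
something like:

  $ herd7 -conf linux-kernel.cfg \
	../../Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus

herd7 should then report that the "exists" clause is never satisfied,
matching the "Result: Never" comment in the test.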


>  ...ter_atomic-is-stronger-than-acquire.litmus | 32 +++++++++++++++++++
>  Documentation/atomic-tests/README             |  5 +++
>  Documentation/atomic_t.txt                    | 10 +++---
>  3 files changed, 42 insertions(+), 5 deletions(-)
>  create mode 100644 Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
> 
> diff --git a/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus b/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
> new file mode 100644
> index 000000000000..9a8e31a44b28
> --- /dev/null
> +++ b/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
> @@ -0,0 +1,32 @@
> +C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
> +
> +(*
> + * Result: Never
> + *
> + * Test that an atomic RMW followed by a smp_mb__after_atomic() is
> + * stronger than a normal acquire: both the read and write parts of
> + * the RMW are ordered before the subsequent memory accesses.
> + *)
> +
> +{
> +}
> +
> +P0(int *x, atomic_t *y)
> +{
> +	int r0;
> +	int r1;
> +
> +	r0 = READ_ONCE(*x);
> +	smp_rmb();
> +	r1 = atomic_read(y);
> +}
> +
> +P1(int *x, atomic_t *y)
> +{
> +	atomic_inc(y);
> +	smp_mb__after_atomic();
> +	WRITE_ONCE(*x, 1);
> +}
> +
> +exists
> +(0:r0=1 /\ 0:r1=0)
> diff --git a/Documentation/atomic-tests/README b/Documentation/atomic-tests/README
> index a1b72410b539..714cf93816ea 100644
> --- a/Documentation/atomic-tests/README
> +++ b/Documentation/atomic-tests/README
> @@ -7,5 +7,10 @@ tools/memory-model/README.
>  LITMUS TESTS
>  ============
>  
> +Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
> +	Test that an atomic RMW followed by a smp_mb__after_atomic() is
> +	stronger than a normal acquire: both the read and write parts of
> +	the RMW are ordered before the subsequent memory accesses.
> +
>  Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
>  	Test that atomic_set() cannot break the atomicity of atomic RMWs.
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index 67d1d99f8589..0f1fdedf36bb 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -233,19 +233,19 @@ as well. Similarly, something like:
>  is an ACQUIRE pattern (though very much not typical), but again the barrier is
>  strictly stronger than ACQUIRE. As illustrated:
>  
> -  C strong-acquire
> +  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
>  
>    {
>    }
>  
> -  P1(int *x, atomic_t *y)
> +  P0(int *x, atomic_t *y)
>    {
>      r0 = READ_ONCE(*x);
>      smp_rmb();
>      r1 = atomic_read(y);
>    }
>  
> -  P2(int *x, atomic_t *y)
> +  P1(int *x, atomic_t *y)
>    {
>      atomic_inc(y);
>      smp_mb__after_atomic();
> @@ -253,14 +253,14 @@ strictly stronger than ACQUIRE. As illustrated:
>    }
>  
>    exists
> -  (r0=1 /\ r1=0)
> +  (0:r0=1 /\ 0:r1=0)
>  
>  This should not happen; but a hypothetical atomic_inc_acquire() --
>  (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
>  because it would not order the W part of the RMW against the following
>  WRITE_ONCE.  Thus:
>  
> -  P1			P2
> +  P0			P1
>  
>  			t = LL.acq *y (0)
>  			t++;
> 
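
To make the contrast concrete, here is a sketch of the hypothetical
acquire variant discussed above.  Assuming the linux-kernel.def shipped
with your herd7 defines atomic_fetch_inc_acquire(), herd7 should report
this outcome as allowed ("Sometimes"), because the acquire only orders
the read part of the RMW against the later accesses:

  C Atomic-RMW-acquire-is-weaker

  (*
   * Result: Sometimes (expected)
   *
   * Hypothetical variant of the test above: a plain acquire RMW does
   * not order the W part of the RMW before the later WRITE_ONCE(),
   * so the exists clause should be reachable.  Assumes that
   * atomic_fetch_inc_acquire() is available in linux-kernel.def.
   *)

  {
  }

  P0(int *x, atomic_t *y)
  {
  	int r0;
  	int r1;

  	r0 = READ_ONCE(*x);
  	smp_rmb();
  	r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
  	int r2;

  	r2 = atomic_fetch_inc_acquire(y);
  	WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)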
