Date:   Wed, 11 Apr 2018 09:19:56 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     linux-kernel@...r.kernel.org,
        Alan Stern <stern@...land.harvard.edu>,
        Andrea Parri <parri.andrea@...il.com>,
        Will Deacon <will.deacon@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Nicholas Piggin <npiggin@...il.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Akira Yokosawa <akiyks@...il.com>
Subject: Re: [PATCH] memory-model: fix cheat sheet typo

On Wed, Apr 11, 2018 at 01:15:58PM +0200, Paolo Bonzini wrote:
> On 10/04/2018 23:34, Paul E. McKenney wrote:
> > Glad it helps, and I have queued it for the next merge window.  Of course,
> > if a further improvement comes to mind, please do not keep it a secret.  ;-)
> 
> Yes, there are several changes that could be included:
> 
> - SV could be added to the prior operation case as well?  It should be
> symmetric
> 
> - The *_relaxed() case also applies to void RMW
> 
> - smp_store_mb() is missing
> 
> - smp_rmb() orders prior reads fully against subsequent RMW because SV
> applies between the two parts of the RMW; likewise smp_wmb() orders prior
> RMW fully against subsequent writes
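> 
> Purely for illustration (not meant for the cheat sheet itself), a minimal
> sketch of that last smp_rmb() point; x is a hypothetical int and v a
> hypothetical atomic_t:
> 
> 	int r1;
> 
> 	r1 = READ_ONCE(x);	/* prior read */
> 	smp_rmb();		/* orders r1 against the read part of the RMW below */
> 	atomic_inc(&v);		/* ...and SV between the RMW's two parts extends
> 				   that ordering to its write part */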
> 
> 
> I am going to submit these changes separately, but before doing that I can
> also show my rewrite of the cheat sheet.
> 
> The advantage is that, at least to me, it's clearer (and gets rid of
> "Self" :)).
> 
> The disadvantage is that it's much longer---almost twice as many lines, even
> if you discount the splitting out of cumulative/propagating into a separate
> table (which in turn is because, to me, it's a different level of black magic).
> 
> ---------------------
> Memory operations are listed in this document as follows:
> 
> 	R:	Read portion of RMW
> 	W:	Write portion of RMW
> 	DR:	Dependent read (address dependency)
> 	DW:	Dependent write (address, data, or control dependency)
> 	RMW:	Atomic read-modify-write operation
> 	SV:	Other accesses to the same variable
> 
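> For concreteness, a hypothetical snippet annotated with those labels (the
> foo structure, gp pointer, and variable names are invented for illustration):
> 
> 	struct foo *p;
> 	int r1, old;
> 
> 	p = rcu_dereference(gp);	/* under rcu_read_lock() */
> 	r1 = READ_ONCE(p->a);		/* DR: address-dependent read */
> 	old = atomic_xchg(&p->cnt, 0);	/* RMW: has an R part and a W part */
> 	if (r1)
> 		WRITE_ONCE(p->b, 1);	/* DW: control-dependent write */
> 	WRITE_ONCE(p->a, 0);		/* SV with respect to the read of p->a */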
> 
> Memory access operations order other memory operations against themselves as
> follows:
> 
>                                    Prior Operation   Subsequent Operation
>                                    ---------------   ---------------------
>                                    R  W  RMW  SV     R  W  DR  DW  RMW  SV
>                                    -  -  ---  --     -  -  --  --  ---  --
> Store, e.g., WRITE_ONCE()                      Y                         Y
> Load, e.g., READ_ONCE()                        Y            Y   Y        Y
> Unsuccessful RMW operation                     Y            Y   Y        Y
> *_relaxed() or void RMW operation              Y            Y   Y        Y
> rcu_dereference()                              Y            Y   Y        Y
> Successful *_acquire()                         Y      r  r  r   r    r   Y
> Successful *_release()             w  w    w   Y                         Y
> smp_store_mb()                     Y  Y    Y   Y      Y  Y   Y   Y   Y   Y
> Successful full non-void RMW       Y  Y    Y   Y      Y  Y   Y   Y   Y   Y
> 
> Key:	Y:	Memory operation provides ordering
> 	r:	Cannot move past the read portion of the *_acquire()
> 	w:	Cannot move past the write portion of the *_release()
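> 
> A standard message-passing sketch of the "r"/"w" entries (data and flag are
> hypothetical variables, both initially zero):
> 
> 	/* CPU 0: the store to data cannot move past the write portion
> 	 * of the release ("w"). */
> 	WRITE_ONCE(data, 1);
> 	smp_store_release(&flag, 1);
> 
> 	/* CPU 1: the read of data cannot move before the read portion
> 	 * of the acquire ("r"), so it is guaranteed to see data == 1. */
> 	if (smp_load_acquire(&flag))
> 		r1 = READ_ONCE(data);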
> 
> 
> Memory barriers order prior memory operations against subsequent memory
> operations.  Two operations are ordered if both have non-empty cells in
> the following table:
> 
>                                    Prior Operation   Subsequent Operation
>                                    ---------------   --------------------
>                                    R  W  RMW         R  W  DR  DW  RMW
>                                    -  -  ---         -  -  --  --  ---
> smp_rmb()                          Y      r          Y      Y       Y
> smp_wmb()                             Y   Y             Y       Y   w
> smp_mb() & synchronize_rcu()       Y  Y   Y          Y  Y   Y   Y   Y
> smp_mb__before_atomic()            Y  Y   Y          a  a   a   a   Y
> smp_mb__after_atomic()             a  a   Y          Y  Y   Y   Y
> 
> 
> Key:	Y:	Barrier provides ordering
> 	r:	Barrier provides ordering against the read portion of RMW
> 	w:	Barrier provides ordering against the write portion of RMW
> 	a:	Barrier provides ordering given intervening RMW atomic operation
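> 
> A sketch of the "a" entries, using a void RMW as the intervening atomic
> (x, y and v are hypothetical variables):
> 
> 	WRITE_ONCE(x, 1);		/* prior store */
> 	smp_mb__before_atomic();	/* orders the prior store against... */
> 	atomic_inc(&v);			/* ...the intervening void RMW... */
> 	r1 = READ_ONCE(y);		/* ...and hence against this later read */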
> 
> 
> Finally, the following describes which operations provide cumulative and
> propagating fences:
> 
>                                      Cumulative         Propagates
>                                      ----------         ----------
> Store, e.g., WRITE_ONCE()
> Load, e.g., READ_ONCE()
> Unsuccessful RMW operation
> *_relaxed() or void RMW operation
> rcu_dereference()
> Successful *_acquire()
> Successful *_release()                   Y
> smp_store_mb()                           Y                  Y
> Successful full non-void RMW             Y                  Y
> smp_rmb()
> smp_wmb()
> smp_mb() & synchronize_rcu()             Y                  Y
> smp_mb__before_atomic()                  Y                  Y
> smp_mb__after_atomic()                   Y                  Y
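> 
> And a three-CPU sketch of the "Cumulative" column, in the spirit of the WRC
> litmus tests shipped with the model (x and y hypothetical, initially zero):
> 
> 	/* CPU 0 */
> 	WRITE_ONCE(x, 1);
> 
> 	/* CPU 1: the release is cumulative, so the store to x that CPU 1
> 	 * has observed is ordered before its store to y for all CPUs. */
> 	r1 = READ_ONCE(x);
> 	smp_store_release(&y, 1);
> 
> 	/* CPU 2 */
> 	r2 = READ_ONCE(y);
> 	smp_rmb();
> 	r3 = READ_ONCE(x);
> 
> 	/* The outcome r1 == 1 && r2 == 1 && r3 == 0 is forbidden. */
> 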
> ----------
> 
> Perhaps you can see some obvious improvements.  Otherwise I'll send patches
> for the above issues.

Splitting it as you have done might indeed have some advantages.  What do
others think?

On the last table, would it make sense to leave out the rows having neither
"Cumulative" nor "Propagates"?

							Thanx, Paul
