Date:   Thu, 5 Jul 2018 09:56:02 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Will Deacon <will.deacon@....com>
Cc:     Daniel Lustig <dlustig@...dia.com>,
        Alan Stern <stern@...land.harvard.edu>,
        Andrea Parri <andrea.parri@...rulasolutions.com>,
        LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
        Boqun Feng <boqun.feng@...il.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Nicholas Piggin <npiggin@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] tools/memory-model: Add write ordering by
 release-acquire and by locks

On Thu, Jul 05, 2018 at 05:22:26PM +0100, Will Deacon wrote:
> On Thu, Jul 05, 2018 at 08:44:39AM -0700, Daniel Lustig wrote:
> > On 7/5/2018 8:31 AM, Paul E. McKenney wrote:
> > > On Thu, Jul 05, 2018 at 10:21:36AM -0400, Alan Stern wrote:
> > >> At any rate, it looks like instead of strengthening the relation, I
> > >> should write a patch that removes it entirely.  I also will add new,
> > >> stronger relations for use with locking, essentially making spin_lock
> > >> and spin_unlock be RCsc.
> > > 
> > > Only in the presence of smp_mb__after_unlock_lock() or
> > > smp_mb__after_spinlock(), correct?  Or am I confused about RCsc?
> > > 
> > > 							Thanx, Paul
> > > 
> > 
> > In terms of naming...is what you're asking for really RCsc?  To me,
> > that would imply that even stores in the first critical section would
> > need to be ordered before loads in the second critical section.
> > Meaning that even x86 would need an mfence in either lock() or unlock()?
> 
> I think a LOCK operation always implies an atomic RmW, which will give
> full ordering guarantees on x86. I know there have been interesting issues
> involving I/O accesses in the past, but I think that's still out of scope
> for the memory model.
> 
> Peter will know.

Agreed, x86 locked operations imply full fences.  So x86 orders the
accesses in consecutive critical sections with respect to an observer
not holding the lock, including stores in an earlier critical section
against loads in a later one.  We have been discussing tightening LKMM
so that an unlock-lock pair orders everything except earlier stores
against later loads.  (Of course, observers that do hold the lock see
full ordering against both earlier and later critical sections.)
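
To make that store-vs-load exception concrete, here is a quick
litmus-style sketch (the test name and "exists" clause are mine, purely
for illustration, not something from the tree):

C unlock-lock-store-to-load

(*
 * Hypothetical example: can P0's store to x in its first critical
 * section be reordered past its load of y in its second critical
 * section, as observed by P1, which does not hold the lock?
 *)

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	r0 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)

Under the proposal above, LKMM would still allow the "exists" outcome,
because only P0's earlier store vs. its later load is left unordered by
the unlock-lock pair, whereas the locked RMWs in x86's lock primitives
would forbid it.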

Or are you pushing for something stronger?
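
(If so, note that an explicit smp_mb__after_spinlock() already gives the
store-vs-load ordering.  Reusing the hypothetical P0 above, again just
as an illustration:

P0(int *x, int *y, spinlock_t *s)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	smp_mb__after_spinlock();	/* full barrier after the acquisition */
	r0 = READ_ONCE(*y);
	spin_unlock(s);
}

The full barrier orders the earlier store against the later load, ruling
out the SB-style outcome in the sketch above.)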

							Thanx, Paul
