lists.openwall.net - Open Source and information security mailing list archives
Date:   Wed, 11 Jul 2018 10:43:11 +0100
From:   Will Deacon <will.deacon@....com>
To:     Andrea Parri <andrea.parri@...rulasolutions.com>
Cc:     Alan Stern <stern@...land.harvard.edu>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Daniel Lustig <dlustig@...dia.com>,
        David Howells <dhowells@...hat.com>,
        Jade Alglave <j.alglave@....ac.uk>,
        Luc Maranget <luc.maranget@...ia.fr>,
        Nicholas Piggin <npiggin@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] tools/memory-model: Add extra ordering for locks and
 remove it for ordinary release/acquire

On Tue, Jul 10, 2018 at 11:38:21AM +0200, Andrea Parri wrote:
> On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking.  In other words, given
> 
> I'd like to step back on this point: I still don't have a strong opinion
> on this, but all this debating made me curious about others' opinion ;-)
> I'd like to see the above argument expanded: what's the rationale behind
> that opinion? Can we maybe add references to actual code relying on that
> ordering? Others that I've been missing?
> 
> I'd extend these same questions to the "ordering of reads" snippet below
> (which has been under discussion for so long...).
> 
> 
> > the following code:
> > 
> > 	WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	WRITE_ONCE(y, 1);
> > 
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s.  In terms of
> > the memory model, this means expanding the cumul-fence relation.
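
In litmus-test form, the requested guarantee looks roughly like this (a
sketch: the test name and the observer P1 are illustrative; note that P1
never touches the lock):

C MP+unlock-lock-write-ordering

{}

P0(int *x, int *y, spinlock_t *s)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

With the patch applied, the "exists" clause is forbidden: if P1 sees the
store to y, it must also see the store to x.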
> > 
> > Locks should also provide read-read (and read-write) ordering in a
> > similar way.  Given:
> > 
> > 	READ_ONCE(x);
> > 	spin_unlock(&s);
> > 	spin_lock(&s);
> > 	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
> > 
> > the load of x should be executed before the load of (or store to) y.
> > The LKMM already provides this ordering, but it provides it even in
> > the case where the two accesses are separated by a release/acquire
> > pair of fences rather than unlock/lock.  This would prevent
> > architectures from using weakly ordered implementations of release and
> > acquire, which seems like an unnecessary restriction.  The patch
> > therefore removes the ordering requirement from the LKMM for that
> > case.
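
The read-ordering case can be sketched similarly (again illustrative;
the writer P1 exists only to detect reordering of P0's reads). With
spin_unlock()/spin_lock() in place of the release/acquire pair in P0,
the "exists" clause stays forbidden; with the pair as shown, the patch
now allows it:

C MP+rel-acq-read-ordering

{}

P0(int *x, int *y, int *s)
{
	int r0;
	int r1;
	int r2;

	r0 = READ_ONCE(*x);
	smp_store_release(s, 1);
	r1 = smp_load_acquire(s);
	r2 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	WRITE_ONCE(*y, 1);
	smp_mb();
	WRITE_ONCE(*x, 1);
}

exists (0:r0=1 /\ 0:r2=0)

If P0's two reads execute in order, seeing x=1 (the later of P1's two
ordered stores) implies seeing y=1 as well, so the clause can only be
satisfied when the reads are reordered.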
> 
> IIUC, the same argument could be used to support the removal of the new
> unlock-rf-lock-po (we already discussed riscv .aq/.rl, it doesn't seem
> hard to imagine an arm64 LDAPR-exclusive, or the adoption of ctrl+isync
> on powerpc).  Why are we effectively preventing their adoption?  Again,
> I'd like to see more details about the underlying motivations...
> 
> 
> > 
> > All the architectures supported by the Linux kernel (including RISC-V)
> > do provide this ordering for locks, albeit for varying reasons.
> > Therefore this patch changes the model in accordance with the
> > developers' wishes.
> > 
> > Signed-off-by: Alan Stern <stern@...land.harvard.edu>
> > 
> > ---
> > 
> > v.2: Restrict the ordering to lock operations, not general release
> > and acquire fences.
> 
> This is another controversial point, and one that makes me shiver ...
> 
> I have the impression that we're dismissing the suggestion "RMW-acquire
> on a par with LKR" a bit hastily.  So, this patch is implying that:
> 
> 	while (cmpxchg_acquire(&s, 0, 1) != 0)
> 		cpu_relax();
> 
> is _not_ a valid implementation of spin_lock()!  Or, at least, it is not
> when paired with an smp_store_release().  Will was anticipating inserting
> arch hooks into the (generic) qspinlock code, when we know that similar
> patterns are spread all over (q)rwlocks, mutexes, rwsem, ... (please
> also notice that the informal documentation currently treats these
> synchronization mechanisms equally as far as "ordering" is concerned...).
> 
> This distinction between locking operations and "other acquires" appears
> to me not only unmotivated but also extremely _fragile_ (difficult to
> use/maintain) when considering the analysis of synchronization mechanisms
> such as those mentioned above or their porting to new architectures.

The main reason for this is that developers use spinlocks all of the
time, including in drivers. It's less common to use explicit atomics and
extremely rare to use explicit acquire/release operations. So let's make
locks as easy to use as possible, by giving them the strongest semantics
that we can whilst remaining a good fit for the instructions provided by
the architectures we support.

If you want to extend this to atomic RMWs, go for it, but I don't think
it's nearly as important, and there will still be ways to implement locks
with insufficient ordering guarantees if you want to.

Will
