Date:   Wed, 8 Sep 2021 10:42:17 -0400
From:   Alan Stern <stern@...land.harvard.edu>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     alexander.shishkin@...ux.intel.com, hpa@...or.com,
        parri.andrea@...il.com, mingo@...nel.org, paulmck@...nel.org,
        vincent.weaver@...ne.edu, tglx@...utronix.de, jolsa@...hat.com,
        acme@...hat.com, torvalds@...ux-foundation.org,
        linux-kernel@...r.kernel.org, eranian@...gle.com, will@...nel.org,
        linux-tip-commits@...r.kernel.org
Subject: Re: [tip:locking/core] tools/memory-model: Add extra ordering for
 locks and remove it for ordinary release/acquire

On Wed, Sep 08, 2021 at 01:44:11PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 08, 2021 at 01:00:26PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 02, 2018 at 03:11:10AM -0700, tip-bot for Alan Stern wrote:
> > > Commit-ID:  6e89e831a90172bc3d34ecbba52af5b9c4a447d1
> > > Gitweb:     https://git.kernel.org/tip/6e89e831a90172bc3d34ecbba52af5b9c4a447d1
> > > Author:     Alan Stern <stern@...land.harvard.edu>
> > > AuthorDate: Wed, 26 Sep 2018 11:29:17 -0700
> > > Committer:  Ingo Molnar <mingo@...nel.org>
> > > CommitDate: Tue, 2 Oct 2018 10:28:01 +0200
> > > 
> > > tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
> > > 
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.  In other words, given
> > > the following code:
> > > 
> > > 	WRITE_ONCE(x, 1);
> > > 	spin_unlock(&s);
> > > 	spin_lock(&s);
> > > 	WRITE_ONCE(y, 1);
> > > 
> > > the stores to x and y should be propagated in order to all other CPUs,
> > > even though those other CPUs might not access the lock s.  In terms of
> > > the memory model, this means expanding the cumul-fence relation.
> > 
> > Let me revive this old thread... recently we ran into the case:
> > 
> > 	WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > 	spin_lock(&r);
> > 	WRITE_ONCE(y, 1);
> > 
> > which is distinct from the original in that UNLOCK and LOCK are not on
> > the same variable.
> > 
> > I'm arguing this should still be RCtso by reason of:
> > 
> >   spin_lock() requires an atomic-acquire which:
> > 
> >     TSO-arch)		implies smp_mb()
> >     ARM64)		is RCsc for any stlr/ldar
> >     Power)		LWSYNC
> >     RISC-V)		fence r, rw
> >     *)			explicitly has smp_mb()
> > 
> > 
> > However, Boqun points out that the memory model disagrees, per:
> > 
> >   https://lkml.kernel.org/r/YTI2UjKy+C7LeIf+@boqun-archlinux
> > 
> > Is this an error/oversight of the memory model, or did I miss a subtlety
> > somewhere?

There's the question of what we think the LKMM should do in principle, and
the question of how far it should go in mirroring the limitations of the
kernel's locking implementations on the various architectures.  These are
obviously separate questions, but both should influence the design of the
memory model.  But to what extent?

Given:

	spin_lock(&r);
	WRITE_ONCE(x, 1);
	spin_unlock(&r);
	spin_lock(&s);
	WRITE_ONCE(y, 1);
	spin_unlock(&s);

there is no reason _in theory_ why a CPU shouldn't reorder and interleave 
the operations to get:

	spin_lock(&r);
	spin_lock(&s);
	WRITE_ONCE(x, 1);
	WRITE_ONCE(y, 1);
	spin_unlock(&r);
	spin_unlock(&s);

(Of course, this wouldn't happen if some other CPU was holding the s lock 
while waiting for r to be released.  In that case the spin loop for s above 
wouldn't be able to end until after the unlock operation on r was complete, 
so this reordering couldn't occur.  But if there was no such contention then 
the reordering is possible in theory -- ignoring restrictions imposed by the 
actual implementations of the operations.)

Given such a reordering, nothing will prevent other CPUs from observing the 
write to y before the write to x.
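
For concreteness, that observation can be written as a litmus test in the
style of tools/memory-model/litmus-tests (a sketch: the test name and the
observer thread P1 are mine; P1's smp_rmb() just fixes the order of its two
reads):

	C unlock-lock-distinct-locks

	(*
	 * Expected result: Sometimes, i.e. per Boqun's analysis the model
	 * allows 1:r1=1 /\ 1:r2=0 when the unlock and lock are on
	 * distinct variables.
	 *)

	{}

	P0(int *x, int *y, spinlock_t *r, spinlock_t *s)
	{
		spin_lock(r);
		WRITE_ONCE(*x, 1);
		spin_unlock(r);
		spin_lock(s);
		WRITE_ONCE(*y, 1);
		spin_unlock(s);
	}

	P1(int *x, int *y)
	{
		int r1;
		int r2;

		r1 = READ_ONCE(*y);
		smp_rmb();
		r2 = READ_ONCE(*x);
	}

	exists (1:r1=1 /\ 1:r2=0)

Running this through herd7 with linux-kernel.cfg (as described in
tools/memory-model/README) should report the exists clause as satisfiable,
which is just Boqun's result restated for this variant.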

> Hmm.. that argument isn't strong enough for RISC-V if I read that FENCE
> thing right. That's just R->RW ordering, which doesn't constrain the
> first WRITE_ONCE().
> 
> However, that spin_unlock has "fence rw, w" with a subsequent write. So
> the whole thing then becomes something like:
> 
> 
> 	WRITE_ONCE(x, 1);
> 	FENCE RW, W
> 	WRITE_ONCE(s.lock, 0);
> 	AMOSWAP %0, 1, r.lock
> 	FENCE R, RW
> 	WRITE_ONCE(y, 1);
> 
> 
> Which I think is still sufficient, irrespective of the whole s!=r thing.

To me, this argument feels like an artificial, unintended consequence of the 
individual implementations, not something that should be considered a 
systematic architectural requirement.  Perhaps one could say the same thing 
about the case where the two spinlock_t variables are the same, but at least 
in that case there is a good argument for inherent ordering of atomic 
accesses to a single variable.

Alan
