Date:   Wed, 8 Sep 2021 17:12:15 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alan Stern <stern@...land.harvard.edu>
Cc:     alexander.shishkin@...ux.intel.com, hpa@...or.com,
        parri.andrea@...il.com, mingo@...nel.org, paulmck@...nel.org,
        vincent.weaver@...ne.edu, tglx@...utronix.de, jolsa@...hat.com,
        acme@...hat.com, torvalds@...ux-foundation.org,
        linux-kernel@...r.kernel.org, eranian@...gle.com, will@...nel.org,
        linux-tip-commits@...r.kernel.org
Subject: Re: [tip:locking/core] tools/memory-model: Add extra ordering for
 locks and remove it for ordinary release/acquire

On Wed, Sep 08, 2021 at 10:42:17AM -0400, Alan Stern wrote:
> On Wed, Sep 08, 2021 at 01:44:11PM +0200, Peter Zijlstra wrote:

> > > Is this an error/oversight of the memory model, or did I miss a subtlety
> > > somewhere?
> 
> There's the question of what we think the LKMM should do in principle, and 
> the question of how far it should go in mirroring the limitations of the 
> various kernel hardware implementations.  These are obviously separate 
> questions, but they both should influence the design of the memory model.  
> But to what extent?
> 
> Given:
> 
> 	spin_lock(&r);
> 	WRITE_ONCE(x, 1);
> 	spin_unlock(&r);
> 	spin_lock(&s);
> 	WRITE_ONCE(y, 1);
> 	spin_unlock(&s);
> 
> there is no reason _in theory_ why a CPU shouldn't reorder and interleave 
> the operations to get:
> 
> 	spin_lock(&r);
> 	spin_lock(&s);
> 	WRITE_ONCE(x, 1);
> 	WRITE_ONCE(y, 1);
> 	spin_unlock(&r);
> 	spin_unlock(&s);
> 
> (Of course, this wouldn't happen if some other CPU was holding the s lock 
> while waiting for r to be released.  In that case the spin loop for s above 
> wouldn't be able to end until after the unlock operation on r was complete, 
> so this reordering couldn't occur.  But if there was no such contention then 
> the reordering is possible in theory -- ignoring restrictions imposed by the 
> actual implementations of the operations.)
> 
> Given such a reordering, nothing will prevent other CPUs from observing the 
> write to y before the write to x.
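The outcome Alan describes can be spelled out as a litmus test in the tools/memory-model style (the test name and the exists clause are mine, purely for illustration; herd7 with linux-kernel.cat is the arbiter of whether the LKMM allows it):

```
C unlock-lock-different-locks

{}

P0(int *x, int *y, spinlock_t *r, spinlock_t *s)
{
	spin_lock(r);
	WRITE_ONCE(*x, 1);
	spin_unlock(r);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*y);
	smp_rmb();
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
```

The exists clause asks exactly the question at issue: can another CPU observe the write to y while the write to x is still invisible, despite the intervening unlock+lock pair.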

To a very small degree the RISC-V implementation actually does some of
that: it allows the stores from unlock and lock to be observed out of
order. But in general we have very weak rules about where the store of
the lock becomes visible in any case.

(revisit the spin_is_locked() saga for more details there)

> > Hmm.. that argument isn't strong enough for RISC-V if I read that FENCE
> > thing right. That's just R->RW ordering, which doesn't constrain the
> > first WRITE_ONCE().
> > 
> > However, that spin_unlock has "fence rw, w" with a subsequent write. So
> > the whole thing then becomes something like:
> > 
> > 
> > 	WRITE_ONCE(x, 1);
> > 	FENCE RW, W
> > 	WRITE_ONCE(s.lock, 0);
> > 	AMOSWAP %0, 1, r.lock
> > 	FENCE R, RW
> > 	WRITE_ONCE(y, 1);
> > 
> > 
> > Which I think is still sufficient, irrespective of the whole s!=r thing.
> 
> To me, this argument feels like an artificial, unintended consequence of the 
> individual implementations, not something that should be considered a 
> systematic architectural requirement.  Perhaps one could say the same thing 
> about the case where the two spinlock_t variables are the same, but at least 
> in that case there is a good argument for inherent ordering of atomic 
> accesses to a single variable.

Possibly :-) The way I got here is that my brain seems to have produced
the rule that UNLOCK+LOCK -> TSO order (an improvement, because for a
time it said SC), and it completely forgot about this subtlety. And in
general I feel that less subtlety is better, but I understand your
counter-argument :/

In any case, it looks like we had to put an smp_mb() in there anyway due
to other reasons and the whole argument is moot again.

I'll try and remember for next time :-)
