Message-ID: <Pine.LNX.4.44L0.1807101039310.1449-100000@iolanthe.rowland.org>
Date: Tue, 10 Jul 2018 10:48:44 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: Andrea Parri <andrea.parri@...rulasolutions.com>
cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>,
Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] tools/memory-model: Add extra ordering for locks and
remove it for ordinary release/acquire
On Tue, 10 Jul 2018, Andrea Parri wrote:
> On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking. In other words, given
>
> I'd like to step back on this point: I still don't have a strong opinion
> on this, but all this debating made me curious about others' opinion ;-)
> I'd like to see the above argument expanded: what's the rationale behind
> that opinion? can we maybe add references to actual code relying on that
> ordering? others that I've been missing?
>
> I'd extend these same questions to the "ordering of reads" snippet below
> (and discussed for so long...).
>
>
> > the following code:
> >
> > WRITE_ONCE(x, 1);
> > 	spin_unlock(&s);
> > spin_lock(&s);
> > WRITE_ONCE(y, 1);
> >
> > the stores to x and y should be propagated in order to all other CPUs,
> > even though those other CPUs might not access the lock s. In terms of
> > the memory model, this means expanding the cumul-fence relation.
> >
> > Locks should also provide read-read (and read-write) ordering in a
> > similar way. Given:
> >
> > READ_ONCE(x);
> > spin_unlock(&s);
> > spin_lock(&s);
> > READ_ONCE(y); // or WRITE_ONCE(y, 1);
> >
> > the load of x should be executed before the load of (or store to) y.
> > The LKMM already provides this ordering, but it provides it even in
> > the case where the two accesses are separated by a release/acquire
> > pair of fences rather than unlock/lock. This would prevent
> > architectures from using weakly ordered implementations of release and
> > acquire, which seems like an unnecessary restriction. The patch
> > therefore removes the ordering requirement from the LKMM for that
> > case.
>
> IIUC, the same argument could be used to support the removal of the new
> unlock-rf-lock-po (we already discussed riscv .aq/.rl, it doesn't seem
> hard to imagine an arm64 LDAPR-exclusive, or the adoption of ctrl+isync
> on powerpc). Why are we effectively preventing their adoption? Again,
> I'd like to see more details about the underlying motivations...
>
>
> >
> > All the architectures supported by the Linux kernel (including RISC-V)
> > do provide this ordering for locks, albeit for varying reasons.
> > Therefore this patch changes the model in accordance with the
> > developers' wishes.
> >
> > Signed-off-by: Alan Stern <stern@...land.harvard.edu>
> >
> > ---
> >
> > v.2: Restrict the ordering to lock operations, not general release
> > and acquire fences.
>
> This is another controversial point, and one that makes me shivering ...
>
> I have the impression that we're dismissing the suggestion "RMW-acquire
> on par with LKR" a bit hastily. So, this patch is implying that:
>
> while (cmpxchg_acquire(&s, 0, 1) != 0)
> cpu_relax();
>
> is _not_ a valid implementation of spin_lock()! Or, at least, it is not
> when paired with an smp_store_release().
At least, it's not a valid general-purpose implementation. For a lot
of architectures it would be okay, but it might not be okay (for
example) on RISC-V.
> Will was anticipating inserting
> arch hooks into the (generic) qspinlock code, when we know that similar
> patterns are spread all over (q)rwlocks, mutexes, rwsem, ... (please
> also notice that the informal documentation is currently treating these
> synchronization mechanisms equally as far as "ordering" is concerned...).
>
> This distinction between locking operations and "other acquires" appears
> to me not only unmotivated but also extremely _fragile_ (difficult to
> use/maintain) when considering the analysis of synchronization mechanisms
> such as those mentioned above or their porting to new architectures.
I will leave these points for others to discuss.
> memory-barriers.txt seems to also need an update in this regard: e.g.,
> "VARIETIES OF MEMORY BARRIERS" currently has:
>
> ACQUIRE operations include LOCK operations and both smp_load_acquire()
> and smp_cond_acquire() operations. [BTW, the latter was replaced by
> smp_cond_load_acquire() in 1f03e8d2919270 ...]
>
> RELEASE operations include UNLOCK operations and smp_store_release()
> operations. [...]
>
> [...] after an ACQUIRE on a given variable, all memory accesses
> preceding any prior RELEASE on that same variable are guaranteed
> to be visible.
As far as I can see, these statements remain valid.
> Please see also "LOCK ACQUISITION FUNCTIONS".
The (3) and (4) entries in that section's list seem redundant.
However, we should point out that one of the reorderings discussed
later on in that section would be disallowed if the RELEASE and ACQUIRE
were locking actions.
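Concretely, the reordering discussed there has this shape (a C11 sketch
of the pattern from that section; M and N are distinct variables, so
the RELEASE and ACQUIRE do not pair):

```c
#include <stdatomic.h>

static int A, B;
static atomic_int M, N;

/* memory-barriers.txt notes that the sequence
 *
 *	*A = a; RELEASE M; ACQUIRE N; *B = b;
 *
 * may be observed as
 *
 *	ACQUIRE N, STORE *B, STORE *A, RELEASE M
 *
 * when M and N are distinct variables.  If M and N were the same
 * lock, the patch would rule that observation out. */
void release_acquire_pair(void)
{
	A = 1;
	atomic_store_explicit(&M, 1, memory_order_release);	/* RELEASE M */
	(void)atomic_load_explicit(&N, memory_order_acquire);	/* ACQUIRE N */
	B = 2;
}
```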
> > +
> > + int x, y;
> > +	spinlock_t s;
> > +
> > + P0()
> > + {
> > + spin_lock(&s);
> > + WRITE_ONCE(x, 1);
> > + spin_unlock(&s);
> > + }
> > +
> > + P1()
> > + {
> > + int r1;
> > +
> > + spin_lock(&s);
> > + r1 = READ_ONCE(x);
> > + WRITE_ONCE(y, 1);
> > + spin_unlock(&s);
> > + }
> > +
> > + P2()
> > + {
> > + int r2, r3;
> > +
> > + r2 = READ_ONCE(y);
> > + smp_rmb();
> > + r3 = READ_ONCE(x);
> > + }
>
> Commit 047213158996f2 in -rcu/dev used the above test to illustrate a
> property of smp_mb__after_spinlock(), c.f., its header comment; if we
> accept this patch, we should consider updating that comment.
Indeed, the use of smp_mb__after_spinlock() illustrated in that comment
would become unnecessary.
Alan