Message-ID: <Pine.LNX.4.44L0.1802281004220.1421-100000@iolanthe.rowland.org>
Date: Wed, 28 Feb 2018 10:16:34 -0500 (EST)
From: Alan Stern <stern@...land.harvard.edu>
To: Will Deacon <will.deacon@....com>
cc: Andrea Parri <parri.andrea@...il.com>,
<linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Akira Yokosawa <akiyks@...il.com>
Subject: Re: [PATCH] Documentation/locking: Document the semantics of
spin_is_locked()

On Wed, 28 Feb 2018, Will Deacon wrote:
> On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > There appears to be recurrent uncertainty concerning the semantics
> > of spin_is_locked(), likely because these semantics remain
> > undocumented or because they have historically been linked to the
> > (likewise unclear) semantics of spin_unlock_wait().
> >
> > Document this semantics.
> >
> > Signed-off-by: Andrea Parri <parri.andrea@...il.com>
> > Cc: Alan Stern <stern@...land.harvard.edu>
> > Cc: Will Deacon <will.deacon@....com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Boqun Feng <boqun.feng@...il.com>
> > Cc: Nicholas Piggin <npiggin@...il.com>
> > Cc: David Howells <dhowells@...hat.com>
> > Cc: Jade Alglave <j.alglave@....ac.uk>
> > Cc: Luc Maranget <luc.maranget@...ia.fr>
> > Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> > Cc: Akira Yokosawa <akiyks@...il.com>
> > ---
> > include/linux/spinlock.h | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 4894d322d2584..2639fdc9a916c 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> >  })
> >
> > +/**
> > + * spin_is_locked() - Check whether a spinlock is locked.
> > + * @lock: Pointer to the spinlock.
> > + *
> > + * This function is NOT required to provide any memory ordering
> > + * guarantees; it could be used for debugging purposes or, when
> > + * additional synchronization is needed, accompanied by other
> > + * constructs (memory barriers) enforcing the synchronization.
> > + *
> > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > + */
>
> I also don't think this is quite right, since the spin_is_locked check
> must be ordered after all prior lock acquisitions (to any lock) on the same
> CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).
Pardon me for being a little slow ... I often have trouble
understanding what people mean when they talk about ordering.
In this case it sounds like you're referring to situations where we
want to avoid ABBA deadlocks, and we want to avoid the overhead of
actually taking a lock when all that's really needed is to check that
nobody else owns the lock.
	spinlock_t a, b;

	P0() {
		spin_lock(&a);
		while (spin_is_locked(&b))
			cpu_relax();
		/* Assert P1 isn't in its critical section and won't enter it */
		...
		spin_unlock(&a);
	}

	P1() {
		spin_lock(&b);
		if (spin_is_locked(&a)) {
			spin_unlock(&b);	/* P0 wants us not to do anything */
			return;
		}
		...
		spin_unlock(&b);
	}
Is this what you meant?
If so, the basic pattern is essentially SB. Taking a lock involves
doing a store, and testing a lock involves doing a read. So ignoring
all the extraneous stuff and any memory barriers, this looks like:
	P0() {
		WRITE_ONCE(a, 1);
		r0 = READ_ONCE(b);
	}

	P1() {
		WRITE_ONCE(b, 1);
		r1 = READ_ONCE(a);
	}

	exists (0:r0=0 /\ 1:r1=0)
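In case anyone wants to feed this to a tool, here is my stab at the
same pattern as a self-contained litmus test for herd7 under the
Linux-kernel memory model (essentially the SB+poonceonces test from
the LKMM litmus-test collection; the variable names and the empty
initializer section are mine).  herd7 reports that the cycle is
allowed:

	C SB

	{}

	P0(int *a, int *b)
	{
		int r0;

		WRITE_ONCE(*a, 1);
		r0 = READ_ONCE(*b);
	}

	P1(int *a, int *b)
	{
		int r1;

		WRITE_ONCE(*b, 1);
		r1 = READ_ONCE(*a);
	}

	exists (0:r0=0 /\ 1:r1=0)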
And of course it is well known that the SB pattern requires full memory
barriers on both sides. Hence the original code should have been
written more like this:
	P0() {
		spin_lock(&a);
		smp_mb__after_spinlock();
		while (spin_is_locked(&b))
			cpu_relax();
		/* Assert P1 won't enter its critical section */
		...
		spin_unlock(&a);
	}

	P1() {
		spin_lock(&b);
		smp_mb__after_spinlock();
		if (spin_is_locked(&a)) {
			spin_unlock(&b);	/* P0 wants us not to do anything */
			return;
		}
		...
		spin_unlock(&b);
	}
with no need for any particular ordering of the spin_is_locked calls.
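As for the smp_mb() you mentioned from 38b850a73034f: if I understand
it correctly, it strengthens spin_is_locked() itself along roughly
the following lines.  (This is just my sketch of the idea, not the
literal arm64 source.)

	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
	{
		/*
		 * Order the read of the lock value after all prior
		 * accesses, including the store that acquired any
		 * lock this CPU is currently holding.
		 */
		smp_mb();
		return !arch_spin_value_unlocked(READ_ONCE(*lock));
	}

That would provide the store->load ordering on the spin_is_locked()
side even when the caller omits an explicit barrier.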
Or have I completely misunderstood your point?
Alan