Message-ID: <20151112213339.GC23979@arm.com>
Date: Thu, 12 Nov 2015 21:33:39 +0000
From: Will Deacon <will.deacon@....com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>, mingo@...nel.org,
linux-kernel@...r.kernel.org, corbet@....net, mhocko@...nel.org,
dhowells@...hat.com, torvalds@...ux-foundation.org,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
On Thu, Nov 12, 2015 at 10:59:06AM -0800, Paul E. McKenney wrote:
> On Thu, Nov 12, 2015 at 08:33:02PM +0100, Oleg Nesterov wrote:
> > On 11/12, Peter Zijlstra wrote:
> > >
> > > On Thu, Nov 12, 2015 at 07:38:07PM +0100, Oleg Nesterov wrote:
> > > > It seems that PPC needs to define smp_mb__before_spinlock() as full mb(),
> > > > no?
> > >
> > > It does:
> > >
> > > arch/powerpc/include/asm/barrier.h:#define smp_mb__before_spinlock() smp_mb()
> >
> > Ah, indeed, thanks.
> >
> > And given that it also defines smp_mb__after_unlock_lock() as smp_mb(),
> > I am starting to understand how it can help to avoid the races with
> > spin_unlock_wait() in (for example) do_exit().
> >
> > But as Boqun has already mentioned, this means that smp_mb__after_unlock_lock()
> > now has a new meaning, which should be documented.
> >
> > Hmm. And 12d560f4 "Privatize smp_mb__after_unlock_lock()" should be reverted
> > then ;)
>
> Surprisingly, this reverts cleanly against today's mainline, please see
> the patch below. Against my -rcu stack, not so much, but so it goes. ;-)
I think we ended up concluding that smp_mb__after_unlock_lock is indeed
required, but I don't think we should just resurrect the old definition,
which doesn't keep UNLOCK -> LOCK distinct from RELEASE -> ACQUIRE. I'm
still working on documenting the different types of transitivity that we
identified in that thread, but it's slow going.
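
For reference, roughly the sort of pattern the old definition was used for
looks like this (the lock and variables are made up for illustration, and we
assume CPU1 takes the lock after CPU0 has released it):

	CPU0			CPU1
	spin_lock(&l);		spin_lock(&l);
	WRITE_ONCE(x, 1);	smp_mb__after_unlock_lock();
	spin_unlock(&l);	r1 = READ_ONCE(y);

The documented intent was that the UNLOCK on CPU0 combined with the LOCK on
CPU1 acts as a full memory barrier, so the store to x is ordered against the
later load of y for all observers, which is more than a plain RELEASE ->
ACQUIRE pair has to give you.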
Also, as far as spin_unlock_wait is concerned, it is neither a LOCK nor
an UNLOCK and this barrier doesn't offer us anything. Sure, it might
work because PPC defines it as smp_mb(), but it doesn't help on arm64
and defining the macro is overkill for us in most places (i.e. RCU).
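
To make the problem concrete, the usage in question looks roughly like this
(hypothetical code rather than an actual kernel path):

	CPU0 (lock holder)		CPU1 (waiter)
	spin_lock(&l);
	WRITE_ONCE(x, 1);		spin_unlock_wait(&l);
	spin_unlock(&l);		smp_mb();
					r1 = READ_ONCE(x);

The waiter wants to know that, once spin_unlock_wait() returns, any critical
section that was already in progress has finished and its stores are visible.
Since spin_unlock_wait() is neither a LOCK nor an UNLOCK, there is nothing for
smp_mb__after_unlock_lock() to attach to on CPU1.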
If we decide that the current usage of spin_unlock_wait is valid, then I
would much rather implement a version of it in the arm64 backend that
does something like:
  1: ldrex r1, [&lock]
     if r1 indicates that the lock is taken, branch back to 1b
     strex r2, r1, [&lock]
     if the store failed (r2 != 0), branch back to 1b
i.e. we don't just test the lock, but we also write it back atomically
if we discover that it's free. That would then clear the exclusive monitor
on any cores in the process of taking the lock and restore the ordering
that we need.
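
As a very rough C sketch of that idea (not a tested implementation: the
function name is made up, it assumes a ticket lock held in a single 32-bit
word that is free when its owner and next halves are equal, and it skips the
wfe/sev machinery a real version would want):

	static inline void example_spin_unlock_wait(unsigned int *lock)
	{
		unsigned int lockval, tmp;

		asm volatile(
		"1:	ldaxr	%w0, %2\n"		/* load-exclusive (acquire) the lock word */
		"	eor	%w1, %w0, %w0, ror #16\n"	/* owner == next, i.e. lock free? */
		"	cbnz	%w1, 1b\n"		/* still held: go round again */
		"	stxr	%w1, %w0, %2\n"		/* try to store the free value back */
		"	cbnz	%w1, 1b\n"		/* lost the exclusive monitor: retry */
		: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
		:
		: "memory");
	}

The failed store-exclusive case is the interesting one: it means another CPU
holds the exclusive monitor because it is part-way through taking the lock,
so we go back and wait for it.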
Will