Message-ID: <20180327102521.GA7347@andrea>
Date: Tue, 27 Mar 2018 12:25:21 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH for-4.17 2/2] powerpc: Remove smp_mb() from
arch_spin_is_locked()
On Tue, Mar 27, 2018 at 11:06:56AM +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2018-03-26 at 12:37 +0200, Andrea Parri wrote:
> > Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
> > added an smp_mb() to arch_spin_is_locked(), in order to ensure that
> >
> >     Thread 0                        Thread 1
> >
> >     spin_lock(A);                   spin_lock(B);
> >     r0 = spin_is_locked(B);         r1 = spin_is_locked(A);
> >
> > never ends up with r0 = r1 = 0, and reported one example (in ipc/sem.c)
> > relying on such a guarantee.
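(For illustration, here is a minimal C sketch of that scenario; the
thread functions and the r0/r1 variables are hypothetical, introduced
only to mirror the commit message.  On a weakly ordered architecture
such as powerpc, the store that acquires a lock may still sit in the
store buffer when the subsequent load in spin_is_locked() executes, so
without a full barrier both threads can observe the other lock as free:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(A);
static DEFINE_SPINLOCK(B);

static int r0, r1;

/* Runs on CPU 0. */
static void thread0(void)
{
        spin_lock(&A);                  /* store: mark A as held */
        r0 = spin_is_locked(&B);        /* load: may be satisfied before
                                           the store above is visible
                                           to CPU 1 */
        spin_unlock(&A);
}

/* Runs on CPU 1, symmetrically. */
static void thread1(void)
{
        spin_lock(&B);
        r1 = spin_is_locked(&A);
        spin_unlock(&B);
}

/*
 * The ipc/sem.c usage required the final state r0 == 0 && r1 == 0 to
 * be forbidden; 51d7d5205d338 guaranteed that by placing an smp_mb()
 * inside arch_spin_is_locked() itself.
 */
)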
> >
> > It's however understood (and undocumented) that spin_is_locked() is not
> > required to provide such an ordering guarantee,
>
> Shouldn't we start by documenting it?
I do sympathize with your concern about the documentation! ;) The patch in
[1] was my (re)action to this concern; the fate of that patch is unclear to
me at this time (and I'm not aware of other proposals in this respect).
>
> > a guarantee that is currently
> > _not_ provided by all implementations/archs, and that callers relying on
> > such ordering should instead use suitable memory barriers before acting
> > on the result of spin_is_locked().
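(Under this convention, a caller that does need the ordering writes the
barrier explicitly.  A minimal sketch, reusing the hypothetical threads
from above:

static void thread0_fixed(void)
{
        spin_lock(&A);
        smp_mb();                       /* order the lock-word store
                                           before the load below */
        r0 = spin_is_locked(&B);
        spin_unlock(&A);
}

/*
 * With thread1 fixed symmetrically, this becomes the classic
 * store-buffering pattern with full barriers on both sides, and the
 * r0 == r1 == 0 outcome is forbidden again.
 */
)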
> >
> > Following a recent audit[1] of the callers of {,raw_}spin_is_locked(),
> > which revealed that none of them rely on this guarantee anymore, this
> > commit removes the leading smp_mb() from the primitive, thus effectively
> > reverting 51d7d5205d338.
>
> I would rather wait until it is properly documented. Debugging that IPC
> problem took a *LOT* of time and energy; I wouldn't want these issues
> to come back and bite us again.
I understand. And I'm grateful for this debugging as well as for the (IMO)
excellent account of it you provided in 51d7d5205d338.
That said ;) I can't help saying that I would probably have resisted
that solution (adding an smp_mb() in my arch_spin_is_locked()), and
instead "blamed"/suggested that the caller fix its memory ordering...
Andrea
>
> > [1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
> >
> > Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
> > Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> > Cc: Paul Mackerras <paulus@...ba.org>
> > Cc: Michael Ellerman <mpe@...erman.id.au>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> > ---
> > arch/powerpc/include/asm/spinlock.h | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> > index b9ebc3085fb79..ecc141e3f1a73 100644
> > --- a/arch/powerpc/include/asm/spinlock.h
> > +++ b/arch/powerpc/include/asm/spinlock.h
> > @@ -67,7 +67,6 @@ static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> >
> > static inline int arch_spin_is_locked(arch_spinlock_t *lock)
> > {
> > - smp_mb();
> > return !arch_spin_value_unlocked(*lock);
> > }
> >
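(For context: arch_spin_value_unlocked(), referenced by the hunk above,
amounted on powerpc at the time to a plain test of the lock word,
roughly as sketched below; this is a sketch for illustration, not part
of the patch.  After the change, arch_spin_is_locked() is therefore an
unordered read of the lock word.

static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
        return lock.slock == 0;
}
)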