Message-ID: <20180226171008.GC30736@arm.com>
Date: Mon, 26 Feb 2018 17:10:08 +0000
From: Will Deacon <will.deacon@....com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Luc Maranget <luc.maranget@...ia.fr>,
Daniel Lustig <dlustig@...dia.com>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Andrea Parri <parri.andrea@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Palmer Dabbelt <palmer@...ive.com>,
Albert Ou <albert@...ive.com>,
Alan Stern <stern@...land.harvard.edu>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Akira Yokosawa <akiyks@...il.com>,
Ingo Molnar <mingo@...nel.org>, linux-riscv@...ts.infradead.org
Subject: Re: [RFC PATCH] riscv/locking: Strengthen spin_lock() and
spin_unlock()
On Mon, Feb 26, 2018 at 09:00:43AM -0800, Linus Torvalds wrote:
> On Mon, Feb 26, 2018 at 8:24 AM, Will Deacon <will.deacon@....com> wrote:
> >
> > Strictly speaking, that's not what we've got implemented on arm64: only
> > the read part of the RmW has Acquire semantics, but there is a total
> > order on the lock/unlock operations for the lock.
>
> Hmm.
>
> I thought we had exactly that bug on some architecture with the queued
> spinlocks, and people decided it was wrong.
>
> But it's possible that I mis-remember, and that we decided it was ok after all.
>
> >	spin_lock(&lock);
> >	WRITE_ONCE(foo, 42);
> >
> > then another CPU could do:
> >
> >	if (smp_load_acquire(&foo) == 42)
> >		BUG_ON(!spin_is_locked(&lock));
> >
> > and that could fire. Is that relied on somewhere?
>
> I have a distinct memory that we said the spinlock write is seen in
> order, wrt the writes inside the spinlock, and the reason was
> something very similar to the above, except that "spin_is_locked()"
> was about our spin_unlock_wait().
Yes, we did run into problems with spin_unlock_wait and we ended up
strengthening the arm64 implementation to do an RmW, which puts it into
the total order of lock/unlock operations. However, we then went and
killed the thing because it was seldom used correctly and we struggled
to define what "correctly" even meant!
> Because we had something very much like the above in the exit path,
> where we would look at some state and do "spin_unlock_wait()" and
> expect to be guaranteed to be the last user after that.
>
> But a few months ago we obviously got rid of spin_unlock_wait exactly
> because people were worried about the semantics.
Similarly for spin_can_lock.
> So maybe I just remember an older issue that simply became a non-issue
> with that.
I think so. If we need to, I could make spin_is_locked do an RmW on
arm64 so we can say that all successful spin_* operations are totally
ordered for a given lock, but spin_is_locked is normally only used as a
coarse debug check anyway where it's assumed that if it's held, it's
held by the current CPU. We should probably move most users over to
lockdep and see what we're left with.
Will