Message-ID: <20180328110436.GR4043@hirez.programming.kicks-ass.net>
Date: Wed, 28 Mar 2018 13:04:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: Andrea Parri <andrea.parri@...rulasolutions.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH for-4.17 2/2] powerpc: Remove smp_mb() from
arch_spin_is_locked()
On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
> That was tempting, but it leaves unfixed all the other potential
> callers, both in-tree and out-of-tree, and in code that's yet to be
> written.
So I myself don't care one teeny tiny bit about out of tree code, they
get to keep their pieces :-)
> Looking today nearly all the callers are debug code, where we probably
> don't need the barrier but we also don't care about the overhead of the
> barrier.
Still, code like:
WARN_ON_ONCE(!spin_is_locked(foo));
will unconditionally emit that SYNC. So you might want to be a little
careful.
> Documenting it would definitely be good, but even then I'd be inclined
> to leave the barrier in our implementation. Matching the documented
> behaviour is one thing, but the actual real-world behaviour on well
> tested platforms (ie. x86) is more important.
By that argument you should switch your spinlock implementation to RCpc
and include that SYNC in either lock or unlock already ;-)
Ideally we'd completely eradicate the *_is_locked() crud from the
kernel, not sure how feasible that really is, but it's a good goal. At
that point the whole issue of the barrier becomes moot of course.