Message-ID: <1470787537.3015.83.camel@kernel.crashing.org>
Date: Wed, 10 Aug 2016 10:05:37 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Manfred Spraul <manfred@...orfullife.com>,
Michael Ellerman <mpe@...erman.id.au>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Davidlohr Bueso <davidlohr@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Susi Sonnenschein <1vier1@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: spin_lock implicit/explicit memory barrier
On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
> Hi Benjamin, Hi Michael,
>
> regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
> arch_spin_is_locked()"):
>
> For the ipc/sem code, I would like to replace the spin_is_locked() with
> a smp_load_acquire(), see:
>
> http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367
>
> http://www.ozlabs.org/~akpm/mmots/broken-out/ipc-semc-fix-complex_count-vs-simple-op-race.patch
>
> To my understanding, I must now add an smp_mb(), otherwise it would be
> broken on PowerPC:
>
> The approach of adding the memory barrier inside spin_is_locked()
> doesn't work here, because the code no longer uses spin_is_locked().
>
> Correct?
Right, otherwise you aren't properly ordered. The current powerpc locks provide
good protection between what's inside vs. what's outside the lock, but not vs.
the lock *value* itself. So if, as you do in the sem code, you use the lock
value as something that is relevant in terms of ordering, you probably need
an explicit full barrier.
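
Roughly, the shape of the pattern in your patch is the following (a reduced
sketch, not the exact ipc/sem.c code; the field names follow your patch,
everything else is illustrative):

	/* Sketch: reduced model of the ipc/sem.c locking pattern. */
	struct sem {
		spinlock_t lock;	/* per-semaphore lock */
	};

	struct sem_array {
		spinlock_t global_lock;	/* stands in for sem_perm.lock */
		bool complex_mode;	/* force global locking while set */
		struct sem sems[];
	};

	/* Complex-op side: switch the array to global-lock mode. */
	static void complex_mode_enter(struct sem_array *sma)
	{
		WRITE_ONCE(sma->complex_mode, true);

		/*
		 * Full barrier: the store to complex_mode must be
		 * visible before we read the per-semaphore lock values.
		 */
		smp_mb();

		/*
		 * ... then wait until every per-semaphore lock has been
		 * dropped, e.g. spin_unlock_wait(&sem->lock) on each sem.
		 */
	}

	/* Simple-op side: the fast path. */
	static void sem_lock_simple(struct sem_array *sma, struct sem *sem)
	{
		spin_lock(&sem->lock);

		/*
		 * spin_lock() is only an acquire: it orders the critical
		 * section against the lock, not the lock *value* against
		 * later loads. Without this full barrier, the store that
		 * takes sem->lock can still sit in the store buffer while
		 * the load of complex_mode completes, so both sides can
		 * see the other's flag as clear and both proceed.
		 */
		smp_mb();

		if (!smp_load_acquire(&sma->complex_mode))
			return;		/* per-semaphore lock suffices */

		/* Slow path: back out and take the global lock. */
		spin_unlock(&sem->lock);
		spin_lock(&sma->global_lock);
		/* ... */
	}

This is the classic store-buffering shape: each side stores its own flag and
then loads the other side's, and only a full barrier on both sides prevents
both loads from seeing "clear".
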
Adding Paul McKenney.
Cheers,
Ben.