Message-ID: <20140911102356.GA6158@arm.com>
Date: Thu, 11 Sep 2014 11:23:56 +0100
From: Will Deacon <will.deacon@....com>
To: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: Peter Hurley <peter@...leysoftware.com>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
"H. Peter Anvin" <hpa@...or.com>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
Jakub Jelinek <jakub@...hat.com>,
Mikael Pettersson <mikpelinux@...il.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Richard Henderson <rth@...ddle.net>,
Oleg Nesterov <oleg@...hat.com>,
Miroslav Franc <mfranc@...hat.com>,
Paul Mackerras <paulus@...ba.org>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Tony Luck <tony.luck@...el.com>,
"linux-ia64@...r.kernel.org" <linux-ia64@...r.kernel.org>
Subject: Re: bit fields && data tearing

On Wed, Sep 10, 2014 at 10:48:06PM +0100, James Bottomley wrote:
> On Tue, 2014-09-09 at 06:40 -0400, Peter Hurley wrote:
> > >> The processor is free to re-order this to:
> > >>
> > >> STORE C
> > >> STORE B
> > >> UNLOCK
> > >>
> > >> That's because the unlock() only guarantees that:
> > >>
> > >> Stores before the unlock in program order are guaranteed to complete
> > >> before the unlock completes. Stores after the unlock _may_ complete
> > >> before the unlock completes.
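> > >>
> > >> As a minimal sketch of that hazard (the lock and fields here are
> > >> invented):
> > >>
> > >>     spin_lock(&dev->lock);
> > >>     dev->b = 1;                 /* STORE B */
> > >>     spin_unlock(&dev->lock);    /* UNLOCK */
> > >>     dev->c = 1;                 /* STORE C: may complete before the
> > >>                                    unlock, and therefore before
> > >>                                    STORE B */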
> > >>
> > >> My point was that even if compiler barriers had the same semantics
> > >> as memory barriers, the situation would be no worse. That is, code
> > >> that is sensitive to memory barriers (like the example I gave above)
> > >> would merely have the same fragility with one-way compiler barriers
> > >> (with respect to the compiler combining writes).
> > >>
> > >> That's what I meant by "no worse than would otherwise exist".
> > >
> > > Actually, that's not correct. This is deja vu, with me on the
> > > other side of the argument. When we first did spinlocks on PA, I
> > > argued as you do: that a lock is only a barrier for code after it,
> > > and an unlock only for code before it. The failing case is that
> > > you can have a critical section which
> > > performs an atomically required operation and a following unit which
> > > depends on it being performed. If you begin the following unit before
> > > the atomic requirement, you may end up losing. It turns out this kind
> > > of pattern is inherent in a lot of mailbox device drivers: you need
> > > to set up the mailbox atomically and then poke it. Setup is usually
> > > the atomic part; deciding which mailbox to prime and actually poking
> > > it happens in the following unit. Priming often involves an I/O bus
> > > transaction, and if you poke before priming, you get a misfire.
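> > >
> > > Roughly this shape (a hypothetical mailbox driver; all names are
> > > made up):
> > >
> > >     spin_lock(&hw->mbox_lock);
> > >     hw->mbox[idx].cmd = cmd;        /* the atomically required setup */
> > >     spin_unlock(&hw->mbox_lock);    /* only semi-permeable */
> > >     writel(idx, hw->regs + MBOX_DOORBELL);  /* the poke: can drift
> > >                                                back past the unlock
> > >                                                and fire before the
> > >                                                setup is complete */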
> >
> > Take it up with the man, because this was discussed extensively last
> > year and it was decided that unlocks would not be full barriers.
> > Hence the changes to memory-barriers.txt that explicitly note this,
> > and the addition of smp_mb__after_unlock_lock() (for two different
> > locks; an unlock followed by a lock on the same lock is a full barrier).
> >
> > Code that expects ordered writes after an unlock needs to explicitly
> > add the memory barrier.
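> >
> > For example (a sketch; the lock and fields are invented):
> >
> >     spin_lock(&lock);
> >     shared->a = 1;
> >     spin_unlock(&lock);
> >     smp_mb();           /* explicitly order the store above against
> >                            the store below */
> >     shared->b = 1;
> >
> > and for the unlock-followed-by-lock case on two different locks:
> >
> >     spin_unlock(&first_lock);
> >     spin_lock(&second_lock);
> >     smp_mb__after_unlock_lock();    /* upgrades the unlock+lock pair
> >                                        to a full barrier */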
>
> I don't really care what ARM does; spin locks are full barriers on
> architectures that need them. The driver problem we had that detected
> our semi-permeable spinlocks was an LSI 53c875, which is enterprise-class
> PCI, so presumably not relevant to ARM anyway.
FWIW, unlock is always fully ordered against non-relaxed IO accesses: we
have pretty heavy barriers in readX/writeX to ensure this on ARM/arm64.
PPC instead does tricks in its unlock path to avoid the overhead on each
IO access.
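
For example (a sketch; the device struct and MBOX_TRIGGER register are
made up):

    spin_lock(&dev->lock);
    writel(cmd, dev->regs + MBOX_TRIGGER);  /* non-relaxed: the barriers
                                               in writel keep this MMIO
                                               write inside the critical
                                               section */
    /* writel_relaxed() would carry no such ordering guarantee */
    spin_unlock(&dev->lock);
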
Will