Message-ID: <1410155407.2027.29.camel@jarvis.lan>
Date: Sun, 07 Sep 2014 22:50:07 -0700
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Peter Hurley <peter@...leysoftware.com>
Cc: paulmck@...ux.vnet.ibm.com, "H. Peter Anvin" <hpa@...or.com>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
Jakub Jelinek <jakub@...hat.com>,
Mikael Pettersson <mikpelinux@...il.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Richard Henderson <rth@...ddle.net>,
Oleg Nesterov <oleg@...hat.com>,
Miroslav Franc <mfranc@...hat.com>,
Paul Mackerras <paulus@...ba.org>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, Tony Luck <tony.luck@...el.com>,
linux-ia64@...r.kernel.org
Subject: Re: bit fields && data tearing
On Sun, 2014-09-07 at 16:41 -0400, Peter Hurley wrote:
> On 09/07/2014 03:04 PM, James Bottomley wrote:
> > On Sun, 2014-09-07 at 09:21 -0700, Paul E. McKenney wrote:
> >> On Sat, Sep 06, 2014 at 10:07:22PM -0700, James Bottomley wrote:
> >>> On Thu, 2014-09-04 at 21:06 -0700, Paul E. McKenney wrote:
> >>>> On Thu, Sep 04, 2014 at 10:47:24PM -0400, Peter Hurley wrote:
> >>>>> Hi James,
> >>>>>
> >>>>> On 09/04/2014 10:11 PM, James Bottomley wrote:
> >>>>>> On Thu, 2014-09-04 at 17:17 -0700, Paul E. McKenney wrote:
> >>>>>>> +And there are anti-guarantees:
> >>>>>>> +
> >>>>>>> + (*) These guarantees do not apply to bitfields, because compilers often
> >>>>>>> + generate code to modify these using non-atomic read-modify-write
> >>>>>>> + sequences. Do not attempt to use bitfields to synchronize parallel
> >>>>>>> + algorithms.
> >>>>>>> +
> >>>>>>> + (*) Even in cases where bitfields are protected by locks, all fields
> >>>>>>> + in a given bitfield must be protected by one lock. If two fields
> >>>>>>> + in a given bitfield are protected by different locks, the compiler's
> >>>>>>> + non-atomic read-modify-write sequences can cause an update to one
> >>>>>>> + field to corrupt the value of an adjacent field.
> >>>>>>> +
> >>>>>>> + (*) These guarantees apply only to properly aligned and sized scalar
> >>>>>>> + variables. "Properly sized" currently means "int" and "long",
> >>>>>>> + because some CPU families do not support loads and stores of
> >>>>>>> + other sizes. ("Some CPU families" is currently believed to
> >>>>>>> + be only Alpha 21064. If this is actually the case, a different
> >>>>>>> + non-guarantee is likely to be formulated.)
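[ A purely hypothetical sketch of the two-lock hazard being documented
above; the field and function names here are made up:

struct flags {
	int f1 : 4;		/* callers are assumed to hold lock1 */
	int f2 : 4;		/* callers are assumed to hold lock2 */
};

void set_f1(struct flags *p) { p->f1 = 1; }
void set_f2(struct flags *p) { p->f2 = 2; }

Both fields live in the same underlying word, so each assignment
compiles to a load, modify and store of that whole word; run
concurrently under different locks, either store can overwrite the
other field's just-written value. ]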
> >>>>>>
> >>>>>> This is a bit unclear. Presumably you're talking about definiteness of
> >>>>>> the outcome (as in what's seen after multiple stores to the same
> >>>>>> variable).
> >>>>>
> >>>>> No, the last condition refers to adjacent byte stores from different
> >>>>> cpu contexts (either interrupt or SMP).
> >>>>>
> >>>>>> The guarantees are only for natural width on Parisc as well,
> >>>>>> so you would get a mess if you did byte stores to adjacent memory
> >>>>>> locations.
> >>>>>
> >>>>> For a simple test like:
> >>>>>
> >>>>> struct x {
> >>>>>         long a;
> >>>>>         char b;
> >>>>>         char c;
> >>>>>         char d;
> >>>>>         char e;
> >>>>> };
> >>>>>
> >>>>> void store_bc(struct x *p) {
> >>>>>         p->b = 1;
> >>>>>         p->c = 2;
> >>>>> }
> >>>>>
> >>>>> on parisc, gcc generates separate byte stores
> >>>>>
> >>>>> void store_bc(struct x *p) {
> >>>>> 0: 34 1c 00 02 ldi 1,ret0
> >>>>> 4: 0f 5c 12 08 stb ret0,4(r26)
> >>>>> 8: 34 1c 00 04 ldi 2,ret0
> >>>>> c: e8 40 c0 00 bv r0(rp)
> >>>>> 10: 0f 5c 12 0a stb ret0,5(r26)
> >>>>>
> >>>>> which appears to confirm that on parisc adjacent byte data
> >>>>> is safe from corruption by concurrent cpu updates; that is,
> >>>>>
> >>>>> CPU 0 | CPU 1
> >>>>> |
> >>>>> p->b = 1 | p->c = 2
> >>>>> |
> >>>>>
> >>>>> will result in p->b == 1 && p->c == 2 (assume both values
> >>>>> were 0 before the call to store_bc()).
> >>>>
> >>>> What Peter said. I would ask for suggestions for better wording, but
> >>>> I would much rather be able to say that single-byte reads and writes
> >>>> are atomic and that aligned-short reads and writes are also atomic.
> >>>>
> >>>> Thus far, it looks like we lose only very old Alpha systems, so unless
> >>>> I hear otherwise, I will update my patch to outlaw these very old systems.
> >>>
> >>> This isn't universally true according to the architecture manual. The
> >>> PARISC CPU can make byte through long-word stores atomic against the
> >>> memory bus but not against the I/O bus, for instance. Atomicity is a
> >>> property of the underlying substrate, not of the CPU. Implying that
> >>> atomicity is a CPU property is incorrect.
>
> To go back to this briefly, while it's true that atomicity is not
> a CPU property, atomicity is not possible if the CPU is not cooperating.
You mean if it doesn't have the bus logic to emit the access? Like ia32
mostly can't do 64-bit writes ... sure.
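For example (a rough sketch; what gcc actually emits depends on the
version, -march and whether SSE is available):

void store64(unsigned long long *p)
{
	*p = 0x0102030405060708ULL;	/* on plain i386 (no SSE) this
					 * typically becomes two 32-bit
					 * movl stores, so the write is
					 * not atomic on the bus */
}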
> >> OK, fair point.
> >>
> >> But are there in-use-for-Linux PARISC memory fabrics (for normal memory,
> >> not I/O) that do not support single-byte and double-byte stores?
> >
> > For aligned access, I believe that's always the case for the memory bus
> > (on both 32 and 64 bit systems). However, it only applies to machine
> > instruction loads and stores of the same width. If you mix the widths
> > on the loads and stores, all bets are off. That means you have to
> > beware of the gcc penchant for coalescing loads and stores: if it sees
> > two adjacent byte stores it can coalesce them into a short store
> > instead ... that screws up the atomicity guarantees.
>
> Two things: I think that gcc has given up on combining adjacent writes,
> perhaps because unaligned writes on some arches are prohibitively
> expensive, so whatever minor optimization was believed to be gained was
> quickly lost many times over. (Although this might not be the reason,
> since one would expect gcc to know the alignment of the promoted store.)
Um, gcc assumes architecturally correct alignment; that's why it pads
structures. Only when accessing packed structures will it fall back to
the smallest-unit loads and stores.
If you have a struct { char a; char b; }; and store constants first to
a and then to b, gcc will obligingly optimise them into a single short
store.
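Something like this (a minimal sketch; whether gcc really merges the
stores depends on version, target and flags):

struct pair {
	char a;
	char b;
};

void store_ab(struct pair *p)
{
	p->a = 1;	/* two adjacent byte stores of constants ...   */
	p->b = 2;	/* ... may be merged into a single aligned
			 * 16-bit store (e.g. movw $0x201 on x86),
			 * which is exactly the coalescing that breaks
			 * per-byte atomicity assumptions */
}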
> But additionally, even if gcc combines adjacent writes _that are part
> of the program flow_ then I believe the situation is no worse than
> would otherwise exist.
>
> For instance, given the following:
>
> struct x {
>         spinlock_t lock;
>         long a;
>         char b;
>         char c;
> };
>
> void locked_store_b(struct x *p)
> {
>         spin_lock(&p->lock);
>         p->b = 1;
>         spin_unlock(&p->lock);
>         p->c = 2;
> }
>
> Granted, the author probably expects ordered writes of
> STORE B
> STORE C
> but that's not guaranteed because there is no memory barrier
> ordering B before C.
Yes, there is: loads and stores may not migrate into or out of critical
sections.
> I see no problem with gcc write-combining in the absence of
> memory barriers (as long as alignment is still respected,
> ie., the size-promoted write is still naturally aligned). The
> combined write is still atomic.
Actual alignment is pretty irrelevant. That's why all architectures
which require alignment also have to implement misaligned traps ... this
is a fundamental requirement of the networking code, for instance.
The main problem is gcc thinking there's a misalignment (or a packed
structure). That causes splitting of loads and stores and that destroys
atomicity.
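For instance (a sketch; the exact store sequence varies by target):

struct odd {
	char pad;
	int x;			/* offset 1: misaligned */
} __attribute__((packed));

void store_x(struct odd *p)
{
	p->x = 0x01020304;	/* on strict-alignment targets gcc
				 * splits this into byte (or byte and
				 * halfword) stores, so it is no longer
				 * a single atomic access */
}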
James