Message-ID: <Pine.LNX.4.44L0.1802191158320.1361-100000@iolanthe.rowland.org>
Date: Mon, 19 Feb 2018 12:14:45 -0500 (EST)
From: Alan Stern <stern@...land.harvard.edu>
To: Andrea Parri <parri.andrea@...il.com>
cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Akira Yokosawa <akiyks@...il.com>,
Kernel development list <linux-kernel@...r.kernel.org>,
<mingo@...nel.org>, Will Deacon <will.deacon@....com>,
<peterz@...radead.org>, <boqun.feng@...il.com>,
<npiggin@...il.com>, <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Patrick Bellasi <patrick.bellasi@....com>
Subject: Re: [PATCH] tools/memory-model: remove rb-dep, smp_read_barrier_depends,
and lockless_dereference
On Sat, 17 Feb 2018, Andrea Parri wrote:
> > Akira's observation about READ_ONCE extends to all (annotated) loads. In
> > fact, it also applies to loads corresponding to unsuccessful RMW operations;
> > consider, for example, the following variation of MP+onceassign+derefonce:
> >
> > C T
> >
> > {
> > 	y=z;
> > 	z=0;
> > }
> >
> > P0(int *x, int **y)
> > {
> > 	WRITE_ONCE(*x, 1);
> > 	smp_store_release(y, x);
> > }
> >
> > P1(int **y, int *z)
> > {
> > 	int *r0;
> > 	int r1;
> >
> > 	r0 = cmpxchg_relaxed(y, z, z);
> > 	r1 = READ_ONCE(*r0);
> > }
> >
> > exists (1:r0=x /\ 1:r1=0)
> >
> > The final state is allowed w/o the patch, and forbidden w/ the patch.
> >
> > This also reminds me of
> >
> > 5a8897cc7631fa544d079c443800f4420d1b173f
> > ("locking/atomics/alpha: Add smp_read_barrier_depends() to _release()/_relaxed() atomics")
> >
> > (that we probably want to mention in the commit message).
>
> Please also notice that 5a8897cc7631f only touched alpha's atomic.h:
> I see no corresponding commit/change on {,cmp}xchg.h (where the "mb"
> is currently conditionally executed).
This leaves us with a question: Do we want to change the kernel by
adding memory barriers after unsuccessful RMW operations on Alpha, or
do we want to change the model by excluding such operations from
address dependencies?
Note that operations like atomic_add_unless() already include memory
barriers.
Alan