Message-Id: <20180706141445.GC3593@linux.vnet.ibm.com>
Date: Fri, 6 Jul 2018 07:14:45 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Will Deacon <will.deacon@....com>
Cc: Daniel Lustig <dlustig@...dia.com>,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <andrea.parri@...rulasolutions.com>,
LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] tools/memory-model: Add write ordering by
release-acquire and by locks
On Fri, Jul 06, 2018 at 10:25:29AM +0100, Will Deacon wrote:
> On Thu, Jul 05, 2018 at 09:56:02AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 05, 2018 at 05:22:26PM +0100, Will Deacon wrote:
> > > On Thu, Jul 05, 2018 at 08:44:39AM -0700, Daniel Lustig wrote:
> > > > On 7/5/2018 8:31 AM, Paul E. McKenney wrote:
> > > > > On Thu, Jul 05, 2018 at 10:21:36AM -0400, Alan Stern wrote:
> > > > >> At any rate, it looks like instead of strengthening the relation, I
> > > > >> should write a patch that removes it entirely. I also will add new,
> > > > >> stronger relations for use with locking, essentially making spin_lock
> > > > >> and spin_unlock be RCsc.
> > > > >
> > > > > Only in the presence of smp_mb__after_unlock_lock() or
> > > > > smp_mb__after_spinlock(), correct? Or am I confused about RCsc?
> > > > >
> > > > > Thanx, Paul
> > > > >
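For reference, both of the primitives mentioned above are placed
immediately after a lock acquisition, and both upgrade some of the
lock's ordering to a full barrier (a minimal sketch only; s is an
arbitrary spinlock):

	spin_lock(&s);
	smp_mb__after_spinlock();	/* this acquisition acts as a full barrier */
	...
	spin_unlock(&s);
	...
	spin_lock(&s);
	smp_mb__after_unlock_lock();	/* the unlock+lock pair acts as a full barrier */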
> > > >
> > > > In terms of naming... is what you're asking for really RCsc? To me,
> > > > that would imply that even stores in the first critical section would
> > > > need to be ordered before loads in the second critical section.
> > > > Meaning that even x86 would need an mfence in either lock() or unlock()?
> > >
> > > I think a LOCK operation always implies an atomic RmW, which will give
> > > full ordering guarantees on x86. I know there have been interesting issues
> > > involving I/O accesses in the past, but I think that's still out of scope
> > > for the memory model.
> > >
> > > Peter will know.
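To illustrate Will's point, here is a toy test-and-set lock (a sketch
only, assuming <linux/atomic.h>; the kernel's real spinlocks are
qspinlocks and look nothing like this):

	struct toy_lock { atomic_t val; };

	static void toy_lock_acquire(struct toy_lock *l)
	{
		/*
		 * Value-returning atomic RmWs are fully ordered, and on
		 * x86 atomic_xchg() compiles to a LOCKed XCHG, which is
		 * a full fence.
		 */
		while (atomic_xchg(&l->val, 1))
			cpu_relax();
	}

	static void toy_lock_release(struct toy_lock *l)
	{
		/* A plain release store suffices; no LOCK prefix on x86. */
		atomic_set_release(&l->val, 0);
	}

So the full fence comes from the lock() side, which is why even the
stores-in-earlier-critical-sections vs. loads-in-later-critical-sections
case ends up ordered on x86.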
> >
> > Agreed, x86 locked operations imply full fences, so x86 will order the
> > accesses in consecutive critical sections with respect to an observer
> > not holding the lock, even stores in earlier critical sections against
> > loads in later critical sections. We have been discussing tightening
> > LKMM to make an unlock-lock pair order everything except earlier stores
> > vs. later loads. (Of course, if everyone holds the lock, they will see
> > full ordering against both earlier and later critical sections.)
> >
> > Or are you pushing for something stronger?
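To make the proposed distinction concrete, here is a sketch in the
litmus-test format used by tools/memory-model (the test name and
variables are illustrative):

C SB+unlock-lock

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r1;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	r1 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y, spinlock_t *s)
{
	int r2;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r2=0)

Under the strengthening described above, the unlock+lock pair on P0
still would not order P0's earlier store against its later load, so
the exists clause would remain satisfiable; fully RCsc locks (or an
smp_mb__after_unlock_lock() after P0's second spin_lock()) would
forbid it. On x86 it is forbidden anyway, courtesy of the LOCKed
RmW in spin_lock().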
>
> I (and I think Peter) would like something stronger, but we can't have
> nice things ;)
There is a lot of that going around! ;-)
> Anyhow, that's not really related to this patch series, so sorry for
> mis-speaking and thanks to everybody who piled on with corrections! I got
> a bit arm-centric for a moment. I think Alan got the gist of it, so I'll
> wait to see what he posts.
Sounds good!
Thanx, Paul