Message-Id: <20190418183919.GO14111@linux.ibm.com>
Date: Thu, 18 Apr 2019 11:39:19 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Andrea Parri <andrea.parri@...rulasolutions.com>,
LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>,
Daniel Kroening <kroening@...ox.ac.uk>,
Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: Adding plain accesses and detecting data races in the LKMM
On Thu, Apr 18, 2019 at 01:44:36PM -0400, Alan Stern wrote:
> On Thu, 18 Apr 2019, Andrea Parri wrote:
>
> > > Another question is "should the kernel permit smp_mb__{before,after}*()
> > > anywhere other than immediately before or after the primitive being
> > > strengthened?"
> >
> > Mmh, I do think that keeping these barriers "immediately before or after
> > the primitive being strengthened" is a good practice (readability, and
> > all that), if this is what you're suggesting.
> >
> > However, a first audit of the callsites showed that this practice is in
> > fact not always applied, notably... ;-)
> >
> > kernel/rcu/tree_exp.h:sync_exp_work_done
> > kernel/sched/cpupri.c:cpupri_set
> >
> > So there appear, at least, to be some exceptions/reasons for not always
> > following it? Thoughts?
> >
> > BTW, while auditing these callsites, I've stumbled across the following
> > snippet (from kernel/futex.c):
> >
> >     *futex = newval;
> >     sys_futex(WAKE, futex);
> >       futex_wake(futex);
> >         smp_mb(); (B)
> >     if (waiters)
> >       ...
> >
> > where B actually expands to (cf. futex_get_mm()):
> >
> >     atomic_inc(...->mm_count);
> >     smp_mb__after_atomic();
> >
> > It seems worth mentioning that, AFAICT, this sequence does not necessarily
> > provide ordering when plain accesses are involved: consider, e.g., the
> > following variant of the snippet:
> >
> >     A:*x = 1;
> >     /*
> >      * I've "ignored" the syscall, which should provide
> >      * (at least) a compiler barrier...
> >      */
> >     atomic_inc(u);
> >     smp_mb__after_atomic();
> >     B:r0 = *y;
> >
> > On x86, AFAICT, the compiler can do this:
> >
> >     atomic_inc(u);
> >     A:*x = 1;
> >     smp_mb__after_atomic();
> >     B:r0 = *y;
> >
> > (the x86 implementation of atomic_inc() contains no compiler barrier), and
> > then the CPU can "reorder" A and B (smp_mb__after_atomic() being #defined
> > as a mere compiler barrier on x86).
>
> Are you saying that on x86, atomic_inc() acts as a full memory barrier
> but not as a compiler barrier, and vice versa for
> smp_mb__after_atomic()? Or that neither atomic_inc() nor
> smp_mb__after_atomic() implements a full memory barrier?
>
> Either one seems like a very dangerous situation indeed.
If I am following the macro-name breadcrumb trails correctly, x86's
atomic_inc() does have a compiler barrier.  But this is an accident of
implementation -- from what I can see, it is not required to have one.
So smp_mb__after_atomic() is only guaranteed to order the atomic_inc()
before B, not A.  To order A before B in the above example, an
smp_mb__before_atomic() is also needed.
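
For illustration, here is a rough litmus-test sketch of that pattern (the
variable names and the smp_rmb() observer are mine, and I am using marked
accesses because that is what the current model handles); with both fences
in place, the cited outcome should be forbidden:

C before-after-atomic-sketch

(*
 * Illustrative sketch only: smp_mb__before_atomic() orders the first
 * WRITE_ONCE() before the atomic_inc(), and smp_mb__after_atomic()
 * orders the atomic_inc() before the second WRITE_ONCE().
 *)

{}

P0(int *x, int *y, atomic_t *u)
{
        WRITE_ONCE(*x, 1);
        smp_mb__before_atomic();
        atomic_inc(u);
        smp_mb__after_atomic();
        WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
        int r0;
        int r1;

        r0 = READ_ONCE(*y);
        smp_rmb();
        r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)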
But now that I look, LKMM looks to be stating a stronger guarantee:

        ([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
        ([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
        ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
        ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
                fencerel(After-unlock-lock) ; [M])
Maybe something like this?

        ([M] ; fencerel(Before-atomic) ; [RMW] ; fencerel(After-atomic) ; [M]) |
        ([M] ; fencerel(Before-atomic) ; [RMW]) |
        ([RMW] ; fencerel(After-atomic) ; [M]) |
        ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
        ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
                fencerel(After-unlock-lock) ; [M])
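
To make the difference concrete, here is another sketch (again marked
accesses, names mine): with the current po?-based definition, P0's first
WRITE_ONCE() is ordered before its last one, so the cited outcome is
forbidden; with the weaker variant above, only the atomic_inc() itself
would be ordered before the last WRITE_ONCE(), so the outcome would no
longer be forbidden by this fence rule alone:

C after-atomic-only-sketch

(*
 * Illustrative sketch only: there is no smp_mb__before_atomic() here,
 * so the two variants of the After-atomic rule give different answers
 * for P0's first WRITE_ONCE().
 *)

{}

P0(int *x, int *y, atomic_t *u)
{
        WRITE_ONCE(*x, 1);
        atomic_inc(u);
        smp_mb__after_atomic();
        WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
        int r0;
        int r1;

        r0 = READ_ONCE(*y);
        smp_rmb();
        r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)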
Who is the lead maintainer for this stuff, anyway??? ;-)
Thanx, Paul
> Alan
>
> > The mips implementation also seems to be affected by such "reorderings":
> > I am not familiar with this implementation but, AFAICT, it does not
> > enforce ordering from A to B in the following snippet:
> >
> >     A:*x = 1;
> >     atomic_inc(u);
> >     smp_mb__after_atomic();
> >     B:WRITE_ONCE(*y, 1);
> >
> > when CONFIG_WEAK_ORDERING=y, CONFIG_WEAK_REORDERING_BEYOND_LLSC=n.
> >
> > Do these observations make sense to you? Thoughts?
> >
> > Andrea
>