Message-ID: <20190418125412.GA10817@andrea>
Date: Thu, 18 Apr 2019 14:54:12 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: Alan Stern <stern@...land.harvard.edu>,
LKMM Maintainers -- Akira Yokosawa <akiyks@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>,
Daniel Kroening <kroening@...ox.ac.uk>,
Kernel development list <linux-kernel@...r.kernel.org>
Subject: Re: Adding plain accesses and detecting data races in the LKMM
> Another question is "should the kernel permit smp_mb__{before,after}*()
> anywhere other than immediately before or after the primitive being
> strengthened?"
Mmh, I do think that keeping these barriers "immediately before or after
the primitive being strengthened" is a good practice (readability, and
all that), if this is what you're suggesting.
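
(For concreteness, the kind of pattern I have in mind is along the lines
of the example in Documentation/memory-barriers.txt, roughly:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

that is, with the barrier sitting right next to the RMW it strengthens.)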
However, a first audit of the callsites showed that this practice is in
fact not always applied; notably... ;-)

	kernel/rcu/tree_exp.h:sync_exp_work_done
	kernel/sched/cpupri.c:cpupri_set

So there appear to be at least some exceptions to, or reasons for not
always following, this practice?  Thoughts?
BTW, while auditing these callsites, I stumbled across the following
snippet (from kernel/futex.c):

	*futex = newval;
	sys_futex(WAKE, futex);
	  futex_wake(futex);
	  smp_mb(); (B)
	  if (waiters)
	    ...

where B is actually (cf. futex_get_mm()):

	atomic_inc(...->mm_count);
	smp_mb__after_atomic();
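
For reference, futex_get_mm() reads roughly as follows (modulo my
reading of kernel/futex.c; mmgrab() is a thin wrapper around
atomic_inc(&mm->mm_count)):

	static inline void futex_get_mm(union futex_key *key)
	{
		mmgrab(key->private.mm);
		/*
		 * Ensure futex_get_mm() implies a full barrier;
		 * this is the (B) above.
		 */
		smp_mb__after_atomic();
	}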

It seems worth mentioning that, AFAICT, this sequence does not
necessarily provide ordering when plain accesses are involved: consider,
e.g., the following variant of the snippet:

	A:*x = 1;
	/*
	 * I've "ignored" the syscall, which should provide
	 * (at least) a compiler barrier...
	 */
	atomic_inc(u);
	smp_mb__after_atomic();
	B:r0 = *y;

On x86, AFAICT, the compiler can do this:

	atomic_inc(u);
	A:*x = 1;
	smp_mb__after_atomic();
	B:r0 = *y;

(the implementation of atomic_inc() contains no compiler barrier); the
CPU can then "reorder" A and B, since smp_mb__after_atomic() is #defined
to a (mere) compiler barrier on x86.
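
To spell out the x86 case, the relevant definitions read roughly as
follows, if I'm reading the arch code correctly (no "memory" clobber in
the inline asm, and a compiler-barrier-only smp_mb__after_atomic()):

	/* arch/x86/include/asm/atomic.h */
	static __always_inline void arch_atomic_inc(atomic_t *v)
	{
		asm volatile(LOCK_PREFIX "incl %0"
			     : "+m" (v->counter));
	}

	/* arch/x86/include/asm/barrier.h: the LOCK'ed instruction is
	   assumed to provide the ordering, so only a compiler barrier */
	#define __smp_mb__after_atomic()	barrier()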

The mips implementation also seems affected by such "reorderings": I am
not familiar with this implementation but, AFAICT, it does not enforce
ordering from A to B in the following snippet:

	A:*x = 1;
	atomic_inc(u);
	smp_mb__after_atomic();
	B:WRITE_ONCE(*y, 1);

when CONFIG_WEAK_ORDERING=y and CONFIG_WEAK_REORDERING_BEYOND_LLSC=n.
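
(Assuming, as in the x86 case, that the mips atomic_inc() provides no
compiler barrier while its smp_mb__after_atomic() does with this
config, the compiler would similarly be free to produce

	atomic_inc(u);
	A:*x = 1;
	smp_mb__after_atomic();
	B:WRITE_ONCE(*y, 1);

after which nothing would seem to order the plain store A against the
store B.)
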
Do these observations make sense to you? Thoughts?
Andrea