Message-Id: <20230105010952.1774272-4-paulmck@kernel.org>
Date: Wed, 4 Jan 2023 17:09:52 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
kernel-team@...a.com, mingo@...nel.org
Cc: stern@...land.harvard.edu, parri.andrea@...il.com, will@...nel.org,
peterz@...radead.org, boqun.feng@...il.com, npiggin@...il.com,
dhowells@...hat.com, j.alglave@....ac.uk, luc.maranget@...ia.fr,
akiyks@...il.com, Jonas Oberhauser <jonas.oberhauser@...wei.com>,
Viktor Vafeiadis <viktor@...-sws.org>,
"Paul E . McKenney" <paulmck@...nel.org>
Subject: [PATCH memory-model 4/4] tools: memory-model: Make plain accesses carry dependencies

From: Jonas Oberhauser <jonas.oberhauser@...wei.com>

As reported by Viktor, plain accesses in LKMM are weaker than
accesses to registers: the latter carry dependencies but the former
do not. This is exemplified in the following snippet:

  int r = READ_ONCE(*x);
  WRITE_ONCE(*y, r);

Here a data dependency links the READ_ONCE() to the WRITE_ONCE(),
preserving their order, because the model treats r as a register.
If r is turned into a memory location accessed by plain accesses,
however, the link is broken and the order between READ_ONCE() and
WRITE_ONCE() is no longer preserved.
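
For illustration (a sketch using a hypothetical per-thread location *z,
not taken from the patch), this is the kind of code affected; before
this change, LKMM did not order the READ_ONCE() and the WRITE_ONCE()
below, even though the stored data plainly depends on the loaded data:

  int r = READ_ONCE(*x);
  *z = r;              /* plain store of the loaded data */
  WRITE_ONCE(*y, *z);  /* plain load feeds the marked store */
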
This is too conservative, since any optimizations on plain
accesses that might break dependencies are also possible on
registers; it also contradicts the intuitive notion of "dependency"
as the data stored by the WRITE_ONCE() does depend on the data read
by the READ_ONCE(), independently of whether r is a register or a
memory location.

This is resolved by redefining all dependencies to include
dependencies carried by memory accesses; a dependency is said to be
carried by memory accesses (in the model: carry-dep) from one load
to another load if the initial load is followed by an arbitrarily
long sequence alternating between stores and loads of the same
thread, where the data of each store depends on the previous load,
and is read by the next load.
Any dependency linking the final load in the sequence to another
access also links the initial load in the sequence to that access.
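
As an illustrative sketch (using assumed locations *t1 and *t2, not
part of the patch), such a carry-dep sequence looks like this; the data
dependency from the final plain load to the WRITE_ONCE() now extends
back to the initial READ_ONCE():

  int r1 = READ_ONCE(*x);  /* initial (marked) load */
  *t1 = r1;                /* plain store, data depends on r1 */
  int r2 = *t1;            /* plain load reading that store */
  *t2 = r2;                /* the sequence may repeat ... */
  int r3 = *t2;            /* final load of the sequence */
  WRITE_ONCE(*y, r3);      /* dependency reaches back to READ_ONCE(*x) */
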
More details can be found in this LKML discussion:
https://lore.kernel.org/lkml/d86295788ad14a02874ab030ddb8a6f8@huawei.com/

Reported-by: Viktor Vafeiadis <viktor@...-sws.org>
Signed-off-by: Jonas Oberhauser <jonas.oberhauser@...wei.com>
Reviewed-by: Alan Stern <stern@...land.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
.../Documentation/explanation.txt | 9 +++++-
tools/memory-model/linux-kernel.bell | 6 ++++
.../litmus-tests/dep+plain.litmus | 31 +++++++++++++++++++
3 files changed, 45 insertions(+), 1 deletion(-)
create mode 100644 tools/memory-model/litmus-tests/dep+plain.litmus

diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index e901b47236c37..8e70852384709 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -2575,7 +2575,7 @@ smp_store_release() -- which is basically how the Linux kernel treats
 them.
 
 Although we said that plain accesses are not linked by the ppo
-relation, they do contribute to it indirectly. Namely, when there is
+relation, they do contribute to it indirectly. Firstly, when there is
 an address dependency from a marked load R to a plain store W,
 followed by smp_wmb() and then a marked store W', the LKMM creates a
 ppo link from R to W'. The reasoning behind this is perhaps a little
@@ -2584,6 +2584,13 @@ for this source code in which W' could execute before R. Just as with
 pre-bounding by address dependencies, it is possible for the compiler
 to undermine this relation if sufficient care is not taken.
 
+Secondly, plain accesses can carry dependencies: If a data dependency
+links a marked load R to a store W, and the store is read by a load R'
+from the same thread, then the data loaded by R' depends on the data
+loaded originally by R. Thus, if R' is linked to any access X by a
+dependency, R is also linked to access X by the same dependency, even
+if W or R' (or both!) are plain.
+
 There are a few oddball fences which need special treatment:
 smp_mb__before_atomic(), smp_mb__after_atomic(), and
 smp_mb__after_spinlock(). The LKMM uses fence events with special
diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index 5be86b1025e8d..70a9073dec3e5 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -82,3 +82,9 @@ flag ~empty different-values(srcu-rscs) as srcu-bad-nesting
let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
LKR | LKW | UL | LF | RL | RU
let Plain = M \ Marked
+
+(* Redefine dependencies to include those carried through plain accesses *)
+let carry-dep = (data ; rfi)*
+let addr = carry-dep ; addr
+let ctrl = carry-dep ; ctrl
+let data = carry-dep ; data
diff --git a/tools/memory-model/litmus-tests/dep+plain.litmus b/tools/memory-model/litmus-tests/dep+plain.litmus
new file mode 100644
index 0000000000000..ebf84daa9a590
--- /dev/null
+++ b/tools/memory-model/litmus-tests/dep+plain.litmus
@@ -0,0 +1,31 @@
+C dep+plain
+
+(*
+ * Result: Never
+ *
+ * This litmus test demonstrates that in LKMM, plain accesses
+ * carry dependencies much like accesses to registers:
+ * The data stored to *z1 and *z2 by P0() originates from P0()'s
+ * READ_ONCE(), and therefore using that data to compute the
+ * conditional of P0()'s if-statement creates a control dependency
+ * from that READ_ONCE() to P0()'s WRITE_ONCE().
+ *)
+
+{}
+
+P0(int *x, int *y, int *z1, int *z2)
+{
+	int a = READ_ONCE(*x);
+	*z1 = a;
+	*z2 = *z1;
+	if (*z2 == 1)
+		WRITE_ONCE(*y, 1);
+}
+
+P1(int *x, int *y)
+{
+	int r = smp_load_acquire(y);
+	smp_store_release(x, r);
+}
+
+exists (x=1 /\ y=1)
--
2.31.1.189.g2e36527f23