Message-ID: <20210610165710.GT4397@paulmck-ThinkPad-P17-Gen-1>
Date: Thu, 10 Jun 2021 09:57:10 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <urezki@...il.com>,
Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH] rcu/doc: Add a quick quiz to explain further why we need
smp_mb__after_unlock_lock()
On Thu, Jun 10, 2021 at 05:50:29PM +0200, Frederic Weisbecker wrote:
> Add some critical missing pieces of the explanation for why full memory
> barriers are needed throughout the whole grace-period state machine,
> thanks to Paul's explanations.
>
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> Cc: Neeraj Upadhyay <neeraju@...eaurora.org>
> Cc: Joel Fernandes <joel@...lfernandes.org>
> Cc: Uladzislau Rezki <urezki@...il.com>
> Cc: Boqun Feng <boqun.feng@...il.com>
Nice!!! And not bad wording either, though I still could not resist the
urge to wordsmith further. Plus I combined your two examples, in order to
provide a trivial example use of the polling interfaces, if nothing else.
Please let me know if I messed anything up.
Thanx, Paul
------------------------------------------------------------------------
commit f21b8fbdf9a59553da825265e92cedb639b4ba3c
Author: Frederic Weisbecker <frederic@...nel.org>
Date: Thu Jun 10 17:50:29 2021 +0200
rcu/doc: Add a quick quiz to explain further why we need smp_mb__after_unlock_lock()
Add some critical missing pieces of the explanation for why full memory
barriers are needed throughout the whole grace-period state machine,
thanks to Paul's explanations.
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Neeraj Upadhyay <neeraju@...eaurora.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Uladzislau Rezki <urezki@...il.com>
Cc: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
index 11cdab037bff..3cd5cb4d86e5 100644
--- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
+++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst
@@ -112,6 +112,35 @@ on PowerPC.
The ``smp_mb__after_unlock_lock()`` invocations prevent this
``WARN_ON()`` from triggering.
++-----------------------------------------------------------------------+
+| **Quick Quiz**:                                                       |
++-----------------------------------------------------------------------+
+| But the whole chain of rcu_node-structure locking guarantees that     |
+| readers see all pre-grace-period accesses from the updater and        |
+| also guarantees that the updater sees all post-grace-period           |
+| accesses from the readers. So why do we need all of those calls       |
+| to smp_mb__after_unlock_lock()?                                       |
++-----------------------------------------------------------------------+
+| **Answer**:                                                           |
++-----------------------------------------------------------------------+
+| Because we must provide ordering for RCU's polling grace-period       |
+| primitives, for example, get_state_synchronize_rcu() and              |
+| poll_state_synchronize_rcu(). Consider this code::                    |
+|                                                                       |
+|   CPU 0                                     CPU 1                     |
+|   ----                                      ----                      |
+|   WRITE_ONCE(X, 1)                          WRITE_ONCE(Y, 1)          |
+|   g = get_state_synchronize_rcu()           smp_mb()                  |
+|   while (!poll_state_synchronize_rcu(g))    r1 = READ_ONCE(X)         |
+|           continue;                                                   |
+|   r0 = READ_ONCE(Y)                                                   |
+|                                                                       |
+| RCU guarantees that the outcome r0 == 0 && r1 == 0 will not           |
+| happen, even if CPU 1 is in an RCU extended quiescent state (idle     |
+| or offline) and thus won't interact directly with the RCU core        |
+| processing at all.                                                    |
++-----------------------------------------------------------------------+
+
This approach must be extended to include idle CPUs, which need
RCU's grace-period memory ordering guarantee to extend to any
RCU read-side critical sections preceding and following the current
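
For readers who want to try the polling interfaces, the litmus test in
the quick quiz can be spelled out as kernel C code along the following
lines. This is only a sketch: the function names, the global variables,
and the cond_resched() in the polling loop are illustrative choices, not
part of the patch; get_state_synchronize_rcu(), poll_state_synchronize_rcu(),
smp_mb(), WRITE_ONCE(), and READ_ONCE() are the real kernel interfaces.

	#include <linux/rcupdate.h>
	#include <linux/sched.h>

	static int X, Y;
	static int r0, r1;

	/* Updater, running on CPU 0: uses the polling grace-period API. */
	static void cpu0_updater(void)
	{
		unsigned long g;

		WRITE_ONCE(X, 1);
		g = get_state_synchronize_rcu();  /* Snapshot grace-period state. */
		while (!poll_state_synchronize_rcu(g))
			cond_resched();           /* Wait for that grace period to end. */
		r0 = READ_ONCE(Y);
	}

	/* Runs on CPU 1, which might be idle or offline (an RCU
	 * extended quiescent state) the whole time. */
	static void cpu1_other(void)
	{
		WRITE_ONCE(Y, 1);
		smp_mb();	/* Order the store to Y before the load from X. */
		r1 = READ_ONCE(X);
	}

	/* Per the quiz's answer, RCU guarantees that r0 == 0 && r1 == 0
	 * cannot happen once both functions have completed. */

The quiz's point is that the full barriers supplied by the rcu_node lock
chain (via smp_mb__after_unlock_lock()) are what allow
poll_state_synchronize_rcu() to return true only after the required
ordering is in place, even though CPU 1 never runs any RCU code itself.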
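
And for completeness, here is a minimal sketch of the unlock+lock
promotion itself. The my_node structure and my_lock_node() function are
made-up names for illustration; in the kernel this pattern is wrapped in
RCU's rcu_node locking helpers (for example, raw_spin_lock_rcu_node()),
and smp_mb__after_unlock_lock() is the real primitive.

	#include <linux/spinlock.h>

	struct my_node {
		raw_spinlock_t lock;
	};

	/* Acquire the lock, then promote the prior unlock plus this
	 * lock acquisition to a full memory barrier on architectures
	 * (such as PowerPC) where the unlock+lock pair alone is not
	 * one; elsewhere this is a no-op. */
	static void my_lock_node(struct my_node *np)
	{
		raw_spin_lock(&np->lock);
		smp_mb__after_unlock_lock();
	}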