Message-Id: <20201015002301.101830-7-joel@joelfernandes.org>
Date: Wed, 14 Oct 2020 20:23:01 -0400
From: "Joel Fernandes (Google)" <joel@...lfernandes.org>
To: linux-kernel@...r.kernel.org
Cc: "Joel Fernandes (Google)" <joel@...lfernandes.org>,
Ingo Molnar <mingo@...hat.com>,
Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Marco Elver <elver@...gle.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"Paul E. McKenney" <paulmck@...nel.org>, rcu@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>, fweisbec@...il.com,
neeraj.iitr10@...il.com
Subject: [PATCH v7 6/6] rcu/segcblist: Add additional comments to explain smp_mb()
Memory barriers are needed when updating the full length of the
segcblist, but it is not fully clear why one is needed both before
and after the update. This patch therefore adds comments to the
function header to explain them.
Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
---
kernel/rcu/rcu_segcblist.c | 38 ++++++++++++++++++++++++++++++++++----
1 file changed, 34 insertions(+), 4 deletions(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 271d5d9d7f60..25ffd07f9951 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -147,17 +147,47 @@ static void rcu_segcblist_inc_seglen(struct rcu_segcblist *rsclp, int seg)
* field to disagree with the actual number of callbacks on the structure.
* This increase is fully ordered with respect to the callers accesses
* both before and after.
+ *
+ * About memory barriers:
+ * There is a situation where rcu_barrier() locklessly samples the full
+ * length of the segmented cblist before deciding what to do. That can
+ * race with another path that calls this function. rcu_barrier() should
+ * not wrongly assume there are no callbacks, so any transitions from 1->0
+ * and 0->1 have to be carefully ordered with respect to list modifications.
+ *
+ * A memory barrier is needed before adding to the length, for the case
+ * where v is negative. This does not happen in the current code, but it
+ * used to; keep the barrier for robustness. When the length transitions
+ * from 1 -> 0, the write of 0 has to be ordered *after* the memory
+ * accesses of the dequeued CBs and the segcblist
+ * modifications:
+ * P0 (what P1 sees) P1
+ * set len = 0
+ * rcu_barrier sees len as 0
+ * dequeue from list
+ * rcu_barrier does nothing.
+ *
+ * A memory barrier is needed after adding to the length for the case
+ * where the length transitions from 0 -> 1. rcu_barrier() must never
+ * miss an update to the length, so the length update has to be seen
+ * *before* any modifications to the segmented list. Otherwise the
+ * following race can happen:
+ * P0 (what P1 sees) P1
+ * queue to list
+ * rcu_barrier sees len as 0
+ * set len = 1.
+ * rcu_barrier does nothing.
*/
void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
{
#ifdef CONFIG_RCU_NOCB_CPU
- smp_mb__before_atomic(); /* Up to the caller! */
+ smp_mb__before_atomic(); /* Read function's comments */
atomic_long_add(v, &rsclp->len);
- smp_mb__after_atomic(); /* Up to the caller! */
+ smp_mb__after_atomic(); /* Read function's comments */
#else
- smp_mb(); /* Up to the caller! */
+ smp_mb(); /* Read function's comments */
WRITE_ONCE(rsclp->len, rsclp->len + v);
- smp_mb(); /* Up to the caller! */
+ smp_mb(); /* Read function's comments */
#endif
}
--
2.28.0.1011.ga647a8990f-goog