Message-Id: <20200921012152.2831904-4-joel@joelfernandes.org>
Date:   Sun, 20 Sep 2020 21:21:50 -0400
From:   "Joel Fernandes (Google)" <joel@...lfernandes.org>
To:     linux-kernel@...r.kernel.org
Cc:     "Joel Fernandes (Google)" <joel@...lfernandes.org>,
        Ingo Molnar <mingo@...hat.com>,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Marco Elver <elver@...gle.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        neeraj.iitr10@...il.com, "Paul E. McKenney" <paulmck@...nel.org>,
        rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
        "Uladzislau Rezki (Sony)" <urezki@...il.com>
Subject: [RFC v5 3/5] rcu: Fix rcu_barrier() breakage from earlier patch

This patch is split out because I am not sure about it, and also because it
does not fix the issue I am seeing with TREE04. :-( It does, however, fix the
theoretical rcu_barrier() issue I was considering.

The previous patch breaks rcu_barrier() (in theory at least): rcu_barrier() may
see the callback list as empty, skip queuing a callback on that CPU, and return
too early. Fix it by storing state indicating that callbacks are being invoked,
so that the callback list does not appear empty while invocation is in flight.
This is a terrible hack; however, it still does not fix TREE04.

Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
---
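Not for commit, reviewers only: below is a minimal userspace sketch of the
invariant this patch adds, with C11 atomics standing in for the kernel
primitives. All names (fake_segcblist and friends) are made up for
illustration; it only models why rcu_segcblist_n_cbs() must not report zero
while rcu_do_batch() is invoking the callbacks it extracted.

/* Userspace sketch, NOT kernel code. */
#include <stdatomic.h>
#include <stdio.h>

struct fake_segcblist {
	atomic_long len;	/* stands in for rsclp->len */
	atomic_int invoking;	/* stands in for rsclp->invoking */
};

/* Mirrors the patched rcu_segcblist_n_cbs(): a list whose callbacks
 * are mid-invocation reports at least one callback. */
static long fake_n_cbs(struct fake_segcblist *l)
{
	long ret = atomic_load(&l->len);

	if (ret)
		return ret;
	return atomic_load(&l->invoking) ? 1 : 0;
}

int main(void)
{
	struct fake_segcblist l = { .len = 3 };

	/* rcu_do_batch(), schematically: mark invocation, extract CBs. */
	atomic_store(&l.invoking, 1);
	atomic_store(&l.len, 0);

	/* A concurrent rcu_barrier() sampling the list here would,
	 * without ->invoking, see zero callbacks, skip this CPU, and
	 * return before the extracted callbacks have run. */
	printf("n_cbs during invocation: %ld\n", fake_n_cbs(&l));	/* 1 */

	/* Invocation done: order prior list updates before clearing
	 * ->invoking (the smp_mb() in the patch), then clear it. */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store(&l.invoking, 0);

	printf("n_cbs afterwards: %ld\n", fake_n_cbs(&l));	/* 0 */
	return 0;
}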
 include/linux/rcu_segcblist.h |  1 +
 kernel/rcu/rcu_segcblist.h    | 23 +++++++++++++++++++++--
 kernel/rcu/tree.c             |  5 +++++
 3 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
index d462ae5e340a..319a565f6ecb 100644
--- a/include/linux/rcu_segcblist.h
+++ b/include/linux/rcu_segcblist.h
@@ -76,6 +76,7 @@ struct rcu_segcblist {
 #endif
 	u8 enabled;
 	u8 offloaded;
+	u8 invoking;
 };
 
 #define RCU_SEGCBLIST_INITIALIZER(n) \
diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
index 7f4e02bbb806..78949e125364 100644
--- a/kernel/rcu/rcu_segcblist.h
+++ b/kernel/rcu/rcu_segcblist.h
@@ -40,14 +40,33 @@ static inline bool rcu_segcblist_empty(struct rcu_segcblist *rsclp)
 	return !READ_ONCE(rsclp->head);
 }
 
+static inline void rcu_segcblist_set_invoking(struct rcu_segcblist *rsclp)
+{
+	WRITE_ONCE(rsclp->invoking, 1);
+}
+
+static inline void rcu_segcblist_reset_invoking(struct rcu_segcblist *rsclp)
+{
+	WRITE_ONCE(rsclp->invoking, 0);
+}
+
 /* Return number of callbacks in segmented callback list. */
 static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp)
 {
+	long ret;
 #ifdef CONFIG_RCU_NOCB_CPU
-	return atomic_long_read(&rsclp->len);
+	ret = atomic_long_read(&rsclp->len);
 #else
-	return READ_ONCE(rsclp->len);
+	ret = READ_ONCE(rsclp->len);
 #endif
+
+	/*
+	 * A list whose callbacks are being invoked must not appear
+	 * empty; rcu_barrier() depends on this.
+	 */
+	if (ret)
+		return ret;
+	return READ_ONCE(rsclp->invoking) ? 1 : 0;
 }
 
 /*
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index ab4d4e9ff549..23fb6d7b6d4a 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2461,6 +2461,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	}
 	trace_rcu_batch_start(rcu_state.name,
 			      rcu_segcblist_n_cbs(&rdp->cblist), bl);
+	rcu_segcblist_set_invoking(&rdp->cblist);
 	rcu_segcblist_extract_done_cbs(&rdp->cblist, &rcl);
 	if (offloaded)
 		rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
@@ -2517,6 +2518,10 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	/* Update counts and requeue any remaining callbacks. */
 	rcu_segcblist_insert_done_cbs(&rdp->cblist, &rcl);
 
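+	/* Requeue leftover CBs above before clearing ->invoking, for rcu_barrier(). */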
+	smp_mb();
+	rcu_segcblist_reset_invoking(&rdp->cblist);
+
 	/* Reinstate batch limit if we have worked down the excess. */
 	count = rcu_segcblist_n_cbs(&rdp->cblist);
 	if (rdp->blimit >= DEFAULT_MAX_RCU_BLIMIT && count <= qlowmark)
-- 
2.28.0.681.g6f77f65b4e-goog
