Message-Id: <20200624201226.21197-3-paulmck@kernel.org>
Date: Wed, 24 Jun 2020 13:12:12 -0700
From: paulmck@...nel.org
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Uladzislau Rezki <urezki@...il.com>,
"Paul E . McKenney" <paulmck@...nel.org>
Subject: [PATCH tip/core/rcu 03/17] rcu/tree: Skip entry into the page allocator for PREEMPT_RT
From: "Joel Fernandes (Google)" <joel@...lfernandes.org>
To keep the kfree_rcu() code working in purely atomic sections on RT,
such as non-threaded IRQ handlers and raw spinlock sections, avoid
calling into the page allocator, which uses sleeping locks on RT.
In fact, even if the caller is preemptible, the kfree_rcu() code is
not, as the krcp->lock is a raw spinlock.

Calling into the page allocator is optional, and avoiding it should be
OK, especially with the page pre-allocation support in future patches.
Such pre-allocation would further avoid the need for a dynamically
allocated page in the first place.
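
As an aside, here is a minimal, hypothetical sketch (not part of this
patch) of the unsafe pattern being avoided; demo_lock and
demo_alloc_under_raw_lock() are illustrative stand-ins for krcp->lock
and the bulk-page allocation path:

#include <linux/spinlock.h>
#include <linux/gfp.h>

/* Hypothetical stand-in for krcp->lock, which is a raw spinlock. */
static DEFINE_RAW_SPINLOCK(demo_lock);

static unsigned long demo_alloc_under_raw_lock(void)
{
	unsigned long page;

	raw_spin_lock(&demo_lock);
	/*
	 * Unsafe on PREEMPT_RT: raw_spin_lock() disables preemption
	 * even on RT, but __get_free_page() may acquire spinlock_t
	 * locks internally (for example zone->lock), and those become
	 * sleeping locks on RT, even with GFP_NOWAIT. Sleeping in a
	 * non-preemptible region is a bug, hence the early return
	 * added by this patch when CONFIG_PREEMPT_RT is enabled.
	 */
	page = __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
	raw_spin_unlock(&demo_lock);
	return page;
}

Note that IS_ENABLED(CONFIG_PREEMPT_RT) expands to a compile-time
constant, so the early return below is compiled out entirely on non-RT
kernels.
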
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Reviewed-by: Uladzislau Rezki <urezki@...il.com>
Co-developed-by: Uladzislau Rezki <urezki@...il.com>
Signed-off-by: Uladzislau Rezki <urezki@...il.com>
Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
kernel/rcu/tree.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 64592b4..dbdd509 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3184,6 +3184,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
 	if (!bnode) {
 		WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
 
+		/*
+		 * To keep this path working on raw non-preemptible
+		 * sections, prevent the optional entry into the
+		 * allocator as it uses sleeping locks. In fact, even
+		 * if the caller of kfree_rcu() is preemptible, this
+		 * path still is not, as krcp->lock is a raw spinlock.
+		 * With additional page pre-allocation in the works,
+		 * hitting this return is going to be much less likely.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT_RT))
+			return false;
+
 		bnode = (struct kfree_rcu_bulk_data *)
 			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
 	}
--
2.9.5