Message-Id: <20241009125127.18902-10-neeraj.upadhyay@kernel.org>
Date: Wed, 9 Oct 2024 18:21:26 +0530
From: neeraj.upadhyay@...nel.org
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
paulmck@...nel.org,
joel@...lfernandes.org,
frederic@...nel.org,
boqun.feng@...il.com,
urezki@...il.com,
rostedt@...dmis.org,
mathieu.desnoyers@...icios.com,
jiangshanlai@...il.com,
qiang.zhang1211@...il.com,
peterz@...radead.org,
neeraj.upadhyay@....com,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
Subject: [PATCH v2 09/10] context_tracking: Invoke RCU-tasks enter/exit for NMI context
From: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
rcu_task_enter() and rcu_task_exit() are not called on NMI entry and
exit. So, a Tasks-RCU-Rude grace-period wait is currently required to
ensure that NMI handlers have entered/exited the Tasks-RCU extended
quiescent state (eqs). For architectures which do not require
Tasks-RCU-Rude (because the code sections where RCU is not watching
are marked noinstr), once those architectures switch to not using
Tasks-RCU-Rude, the eqs entry/exit of NMI handlers will need to be
handled correctly for Tasks-RCU holdout tasks running on nohz_full
CPUs. As it is safe to call these two functions from NMI context,
remove the in_nmi() check. This ensures that Tasks-RCU entry/exit is
marked correctly for NMI handlers. With this check removed, all
callers of ct_kernel_exit_state() and ct_kernel_enter_state() now
also call rcu_task_exit() and rcu_task_enter() respectively. So, fold
the rcu_task_exit() and rcu_task_enter() calls into
ct_kernel_exit_state() and ct_kernel_enter_state().
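
For illustration, the two helpers end up looking roughly as follows.
This is a sketch assembled from the hunks below; the ordering comment
in ct_kernel_enter_state() is abbreviated, and its closing
WARN_ON_ONCE() is assumed to mirror the exit-side one with the test
inverted:

static noinstr void ct_kernel_exit_state(int offset)
{
	int seq;

	seq = ct_state_inc(offset);
	// RCU is no longer watching. Better be in extended quiescent state!
	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & CT_RCU_WATCHING));
	rcu_task_exit();	/* Folded in: now runs for every caller, NMI included. */
}

static noinstr void ct_kernel_enter_state(int offset)
{
	int seq;

	rcu_task_enter();	/* Folded in: runs before the state increment. */

	/* CPUs seeing atomic_add_return() must see prior idle sojourns, ... */
	seq = ct_state_inc(offset);
	// RCU is now watching. Better not be in an extended quiescent state!
	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & CT_RCU_WATCHING));
}

Either way, the fold keeps the Tasks-RCU transition adjacent to the
CT_RCU_WATCHING flip, so no caller can perform one without the other.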
Reported-by: Frederic Weisbecker <frederic@...nel.org>
Suggested-by: Frederic Weisbecker <frederic@...nel.org>
Suggested-by: "Paul E. McKenney" <paulmck@...nel.org>
Reviewed-by: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
---
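(Reviewer note, not for the changelog: with the in_nmi() checks gone,
the NMI paths simply fall through to the common helpers. The sketch
below is assembled from the hunks that follow; the nesting
bookkeeping and instrumentation around the shown lines are elided as
"...".)

void noinstr ct_nmi_exit(void)
{
	...
	// RCU is watching here ...
	ct_kernel_exit_state(CT_RCU_WATCHING);	/* Now also does rcu_task_exit(). */
	// ... but is no longer watching here.
}

void noinstr ct_nmi_enter(void)
{
	...
	if (!rcu_is_watching_curr_cpu()) {
		// RCU is not watching here ...
		ct_kernel_enter_state(CT_RCU_WATCHING);	/* Now also does rcu_task_enter(). */
		// ... but is watching here.
		...
	}
	...
}
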
 kernel/context_tracking.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 938c48952d26..85ced563af23 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -91,6 +91,7 @@ static noinstr void ct_kernel_exit_state(int offset)
 	seq = ct_state_inc(offset);
 	// RCU is no longer watching. Better be in extended quiescent state!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & CT_RCU_WATCHING));
+	rcu_task_exit();
 }
 
 /*
@@ -102,6 +103,8 @@ static noinstr void ct_kernel_enter_state(int offset)
 {
 	int seq;
 
+	rcu_task_enter();
+
 	/*
 	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
 	 * and we also must force ordering with the next RCU read-side
@@ -149,7 +152,6 @@ static void noinstr ct_kernel_exit(bool user, int offset)
 	// RCU is watching here ...
 	ct_kernel_exit_state(offset);
 	// ... but is no longer watching here.
-	rcu_task_exit();
 }
 
 /*
@@ -173,7 +175,6 @@ static void noinstr ct_kernel_enter(bool user, int offset)
 		ct->nesting++;
 		return;
 	}
-	rcu_task_enter();
 	// RCU is not watching here ...
 	ct_kernel_enter_state(offset);
 	// ... but is watching here.
@@ -238,9 +239,6 @@ void noinstr ct_nmi_exit(void)
 	// RCU is watching here ...
 	ct_kernel_exit_state(CT_RCU_WATCHING);
 	// ... but is no longer watching here.
-
-	if (!in_nmi())
-		rcu_task_exit();
 }
 
 /**
@@ -273,9 +271,6 @@ void noinstr ct_nmi_enter(void)
 	 */
 	if (!rcu_is_watching_curr_cpu()) {
-		if (!in_nmi())
-			rcu_task_enter();
-
 		// RCU is not watching here ...
 		ct_kernel_enter_state(CT_RCU_WATCHING);
 		// ... but is watching here.
--
2.40.1