Message-Id: <20240213055554.1802415-17-ankur.a.arora@oracle.com>
Date: Mon, 12 Feb 2024 21:55:40 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, peterz@...radead.org, torvalds@...ux-foundation.org,
paulmck@...nel.org, akpm@...ux-foundation.org, luto@...nel.org,
bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
willy@...radead.org, mgorman@...e.de, jpoimboe@...nel.org,
mark.rutland@....com, jgross@...e.com, andrew.cooper3@...rix.com,
bristot@...nel.org, mathieu.desnoyers@...icios.com,
geert@...ux-m68k.org, glaubitz@...sik.fu-berlin.de,
anton.ivanov@...bridgegreys.com, mattst88@...il.com,
krypton@...ich-teichert.org, rostedt@...dmis.org,
David.Laight@...LAB.COM, richard@....at, mjguzik@...il.com,
jon.grimm@....com, bharata@....com, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
Ankur Arora <ankur.a.arora@...cle.com>
Subject: [PATCH 16/30] rcu: force context-switch for PREEMPT_RCU=n, PREEMPT_COUNT=y

With (PREEMPT_RCU=n, PREEMPT_COUNT=y), rcu_flavor_sched_clock_irq()
registers urgently needed quiescent states when preempt_count() is
available and no task or softirq is in a non-preemptible section.

This, however, does nothing for long-running loops where preemption is
only temporarily enabled, since the scheduling-clock tick is unlikely to
land neatly inside the preemptible() section.
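
For illustration only (not part of this patch), a long-running loop of
the problematic shape might look like the kernel-style sketch below;
example_scan(), struct item, and process_item() are made-up names, and
preempt_disable()/preempt_enable() are the usual helpers from
<linux/preempt.h>:

	/*
	 * Hypothetical example: almost all of the work runs with
	 * preemption disabled, and preemption is re-enabled only
	 * briefly between iterations.  The scheduling-clock tick
	 * almost always lands inside the preempt_disable()d region,
	 * so the tick path rarely observes a preemptible() section
	 * and does not report a quiescent state here.
	 */
	static void example_scan(struct item *items, unsigned long nr)
	{
		unsigned long i;

		for (i = 0; i < nr; i++) {
			preempt_disable();
			process_item(&items[i]);	/* bulk of the work */
			preempt_enable();		/* brief preemptible() window */
		}
	}
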
Handle that by forcing a context switch when a quiescent state is needed
urgently but the task is holding a non-zero preempt_count().

Cc: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
kernel/rcu/tree.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d6ac2b703a6d..5f61e7e0f16c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2248,8 +2248,17 @@ void rcu_sched_clock_irq(int user)
 	raw_cpu_inc(rcu_data.ticks_this_gp);
 	/* The load-acquire pairs with the store-release setting to true. */
 	if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
-		/* Idle and userspace execution already are quiescent states. */
-		if (!rcu_is_cpu_rrupt_from_idle() && !user) {
+		/*
+		 * Idle and userspace execution already are quiescent states.
+		 * If, however, we came here from a nested interrupt in the
+		 * kernel, or if we have PREEMPT_RCU=n but are holding a
+		 * preempt_count() (say, with CONFIG_PREEMPT_AUTO=y), then
+		 * force a context switch.
+		 */
+		if ((!rcu_is_cpu_rrupt_from_idle() && !user) ||
+		    ((!IS_ENABLED(CONFIG_PREEMPT_RCU) &&
+		      IS_ENABLED(CONFIG_PREEMPT_COUNT)) &&
+		     (preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)))) {
 			set_tsk_need_resched(current, NR_now);
 			set_preempt_need_resched();
 		}
--
2.31.1