Message-Id: <20240528003521.979836-19-ankur.a.arora@oracle.com>
Date: Mon, 27 May 2024 17:35:04 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, peterz@...radead.org, torvalds@...ux-foundation.org,
        paulmck@...nel.org, rostedt@...dmis.org, mark.rutland@....com,
        juri.lelli@...hat.com, joel@...lfernandes.org, raghavendra.kt@....com,
        sshegde@...ux.ibm.com, boris.ostrovsky@...cle.com,
        konrad.wilk@...cle.com, Ankur Arora <ankur.a.arora@...cle.com>
Subject: [PATCH v2 18/35] rcu: force context-switch for PREEMPT_RCU=n, PREEMPT_COUNT=y

With (PREEMPT_RCU=n, PREEMPT_COUNT=y), rcu_flavor_sched_clock_irq()
registers urgently needed quiescent states when preempt_count() is
available and no task or softirq is in a non-preemptible section.

This, however, does nothing for long-running loops where preemption
is only temporarily enabled, since the tick is unlikely to fall neatly
within the brief preemptible() section.

Handle that by forcing a context switch when a quiescent state is
urgently needed but the task is running with an elevated preempt_count().

Cc: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
 kernel/rcu/tree.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d9642dd06c25..3a0e1d0b939c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2286,8 +2286,17 @@ void rcu_sched_clock_irq(int user)
 	raw_cpu_inc(rcu_data.ticks_this_gp);
 	/* The load-acquire pairs with the store-release setting to true. */
 	if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
-		/* Idle and userspace execution already are quiescent states. */
-		if (!rcu_is_cpu_rrupt_from_idle() && !user) {
+		/*
+		 * Idle and userspace execution already are quiescent states.
+		 * If, however, we came here from a nested interrupt in the
+		 * kernel, or if we have PREEMPT_RCU=n but are holding a
+		 * preempt_count() (say, with CONFIG_PREEMPT_AUTO=y), then
+		 * force a context switch.
+		 */
+		if ((!rcu_is_cpu_rrupt_from_idle() && !user) ||
+		     ((!IS_ENABLED(CONFIG_PREEMPT_RCU) &&
+		       IS_ENABLED(CONFIG_PREEMPT_COUNT)) &&
+		     (preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)))) {
 			set_tsk_need_resched(current);
 			set_preempt_need_resched();
 		}
-- 
2.31.1

