Message-Id: <20240528003521.979836-25-ankur.a.arora@oracle.com>
Date: Mon, 27 May 2024 17:35:10 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, peterz@...radead.org, torvalds@...ux-foundation.org,
paulmck@...nel.org, rostedt@...dmis.org, mark.rutland@....com,
juri.lelli@...hat.com, joel@...lfernandes.org, raghavendra.kt@....com,
sshegde@...ux.ibm.com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com, Ankur Arora <ankur.a.arora@...cle.com>,
Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH v2 24/35] sched: schedule eagerly in resched_cpu()
resched_cpu() is used as an RCU hammer of last resort and so cannot wait
for the lazy exit-to-user reschedule under PREEMPT_AUTO. Force an eager
reschedule by marking the target CPU's current task with
tif_resched(RESCHED_NOW).
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
kernel/sched/core.c | 14 +++++++++++---
kernel/sched/sched.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
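
For context (not part of the diff below), the kind of last-resort use this
is meant for: RCU's forward-progress check nudges a CPU that has been slow
to report a quiescent state. A simplified sketch, loosely modeled on
rcu_implicit_dynticks_qs() in kernel/rcu/tree.c (abridged; treat the exact
fields and locals as approximate):

	/*
	 * Sketch only: if the CPU has sat on a quiescent state for too
	 * long, kick it. With this series the kick below maps to
	 * tif_resched(RESCHED_NOW) even under PREEMPT_AUTO, rather than
	 * waiting for the lazy exit-to-user reschedule.
	 */
	if (time_after(jiffies, rcu_state.jiffies_resched) &&
	    time_after(jiffies, READ_ONCE(rdp->last_fqs_resched) + jtsq)) {
		resched_cpu(rdp->cpu);	/* hammer of last resort */
		WRITE_ONCE(rdp->last_fqs_resched, jiffies);
	}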
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1b930b84eb59..e838328d93d1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1035,8 +1035,9 @@ void wake_up_q(struct wake_q_head *head)
  * For preemption models other than PREEMPT_AUTO: always schedule
  * eagerly.
  *
- * For PREEMPT_AUTO: allow everything else to finish its time quanta, and
- * mark for rescheduling at the next exit to user.
+ * For PREEMPT_AUTO: schedule idle threads eagerly, allow everything else
+ * to finish its time quanta, and mark for rescheduling at the next exit
+ * to user.
  */
 static resched_t resched_opt_translate(struct task_struct *curr,
 				       enum resched_opt opt)
@@ -1044,6 +1045,9 @@ static resched_t resched_opt_translate(struct task_struct *curr,
 	if (!IS_ENABLED(CONFIG_PREEMPT_AUTO))
 		return RESCHED_NOW;
 
+	if (opt == RESCHED_FORCE)
+		return RESCHED_NOW;
+
 	if (is_idle_task(curr))
 		return RESCHED_NOW;
 
@@ -1099,7 +1103,11 @@ void resched_cpu(int cpu)
 
 	raw_spin_rq_lock_irqsave(rq, flags);
 	if (cpu_online(cpu) || cpu == smp_processor_id())
-		resched_curr(rq);
+		/*
+		 * resched_cpu() is typically used as an RCU hammer.
+		 * Mark for imminent resched.
+		 */
+		__resched_curr(rq, RESCHED_FORCE);
 	raw_spin_rq_unlock_irqrestore(rq, flags);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7013bd054a2f..e5e4747fbef2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2466,6 +2466,7 @@ extern void reweight_task(struct task_struct *p, int prio);
 
 enum resched_opt {
 	RESCHED_DEFAULT,
+	RESCHED_FORCE,
 };
 
 extern void __resched_curr(struct rq *rq, enum resched_opt opt);
--
2.31.1