Message-ID: <20250220093257.9380-21-kprateek.nayak@amd.com>
Date: Thu, 20 Feb 2025 09:32:55 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Valentin Schneider <vschneid@...hat.com>, "Ben
Segall" <bsegall@...gle.com>, Thomas Gleixner <tglx@...utronix.de>, "Andy
Lutomirski" <luto@...nel.org>, <linux-kernel@...r.kernel.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>, "Sebastian Andrzej
Siewior" <bigeasy@...utronix.de>, Clark Williams <clrkwllms@...nel.org>,
<linux-rt-devel@...ts.linux.dev>, Tejun Heo <tj@...nel.org>, "Frederic
Weisbecker" <frederic@...nel.org>, Barret Rhoden <brho@...gle.com>, "Petr
Mladek" <pmladek@...e.com>, Josh Don <joshdon@...gle.com>, Qais Yousef
<qyousef@...alina.io>, "Paul E. McKenney" <paulmck@...nel.org>, David Vernet
<dvernet@...a.com>, K Prateek Nayak <kprateek.nayak@....com>, "Gautham R.
Shenoy" <gautham.shenoy@....com>, Swapnil Sapkal <swapnil.sapkal@....com>
Subject: [RFC PATCH 20/22] sched/fair: Implement determine_throttle_state() for partial throttle
With the plumbing for partial throttle in place, implement
determine_throttle_state() to request a partial throttle when it finds
a cfs_rq with kernel mode preempted entities on it. Also remove the
early return in unthrottle_throttled().

"Let it rip"
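The decision the new function makes can be sketched as a small
standalone model (a simplified sketch only; the struct fields and
helper names below are stand-ins for the real scheduler state queried
via se_in_kernel(), task_on_rq_queued() and ignore_task_kcs_stats(),
not kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the scheduler state consulted by
 * determine_throttle_state(); names mirror the patch, but this is
 * a model, not kernel code. */
enum throttle_state { CFS_THROTTLED, CFS_THROTTLED_PARTIAL };

struct task_model {
	bool on_rq_queued;      /* task_on_rq_queued(p) */
	bool in_kernel;         /* se_in_kernel(&p->se) */
	bool kcs_stats_ignored; /* ignore_task_kcs_stats(p) */
};

struct cfs_rq_model {
	bool se_in_kernel;       /* se_in_kernel(se) for the throttled se */
	struct task_model *curr; /* NULL when no current task is set */
};

/* Model of the decision: request a partial throttle whenever a
 * kernel mode entity or a throttle-deferral indicator is found,
 * otherwise fully throttle. */
static enum throttle_state
determine_throttle_state_model(struct cfs_rq_model *gcfs_rq)
{
	struct task_model *curr = gcfs_rq->curr;

	if (gcfs_rq->se_in_kernel)
		return CFS_THROTTLED_PARTIAL;

	if (curr && curr->on_rq_queued &&
	    (curr->kcs_stats_ignored || curr->in_kernel))
		return CFS_THROTTLED_PARTIAL;

	return CFS_THROTTLED;
}
```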
Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
kernel/sched/fair.c | 31 ++++++++++++++++++++++++++-----
1 file changed, 26 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 55e53db8da45..39c7e8f548ca 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5894,13 +5894,37 @@ static inline int throttled_lb_pair(struct task_group *tg,
throttled_hierarchy(dest_cfs_rq);
}
+static __always_inline int se_in_kernel(struct sched_entity *se);
+static inline int ignore_task_kcs_stats(struct task_struct *p);
+
static enum throttle_state
determine_throttle_state(struct cfs_rq *gcfs_rq, struct sched_entity *se)
{
+ struct sched_entity *curr = gcfs_rq->curr;
+
+ if (se_in_kernel(se))
+ return CFS_THROTTLED_PARTIAL;
+
/*
- * TODO: Implement rest once plumbing for
- * CFS_THROTTLED_PARTIAL is done.
+	 * Check if the current task's hierarchy needs throttle deferral.
+	 * During save / restore operations, cfs_rq->curr can still be
+	 * set even though the task has already been dequeued by the
+	 * time put_prev_task() is called, so only inspect the current
+	 * task's indicators when gcfs_rq->curr is set. If the hierarchy
+	 * leads to a queued task executing in kernel mode or having its
+	 * stats ignored, request a partial throttle.
+	 *
+	 * set_next_task_fair() will request a resched if the throttle
+	 * status changes once the stats are reconsidered.
*/
+ if (curr) {
+ struct task_struct *p = rq_of(gcfs_rq)->curr;
+
+ if (task_on_rq_queued(p) &&
+ (ignore_task_kcs_stats(p) || se_in_kernel(&p->se)))
+ return CFS_THROTTLED_PARTIAL;
+ }
+
return CFS_THROTTLED;
}
@@ -7181,9 +7205,6 @@ static void unthrottle_throttled(struct cfs_rq *gcfs_rq, bool in_kernel)
struct rq *rq = rq_of(gcfs_rq);
struct sched_entity *se = gcfs_rq->tg->se[cpu_of(rq)];
- /* TODO: Remove this early return once plumbing is done */
- return;
-
/*
* Demoting a cfs_rq to partial throttle will trigger a
* rq_clock update. Skip all the updates and use the
--
2.43.0