Message-ID: <20251208092744.32737-28-kprateek.nayak@amd.com>
Date: Mon, 8 Dec 2025 09:27:14 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Anna-Maria Behnsen <anna-maria@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>, Thomas Gleixner
<tglx@...utronix.de>
CC: <linux-kernel@...r.kernel.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, K Prateek Nayak <kprateek.nayak@....com>, "Gautham R.
Shenoy" <gautham.shenoy@....com>, Swapnil Sapkal <swapnil.sapkal@....com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>, Chen Yu <yu.c.chen@...el.com>
Subject: [RESEND RFC PATCH v2 28/29] [EXPERIMENTAL] sched/fair: Add a local counter to rate limit task push
Pushing tasks can fail for a multitude of reasons: task affinity, the
unavailability of an idle CPU by the time the balance callback is
executed, and so on.

Maintain a CPU-local counter in sched_domain to rate limit push
attempts when such failures build up. The counter is reset to the value
of "nr_idle_scan" at the time of the periodic balance.

Since "nr_idle_scan" is only computed for SIS_UTIL, the rate limiting
is guarded behind the same sched_feat().
Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
 include/linux/sched/topology.h |  4 ++++
 kernel/sched/fair.c            | 23 +++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 074ee2980cdf..ebe26ce82c1a 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -122,6 +122,10 @@ struct sched_domain {
 	unsigned int alb_failed;
 	unsigned int alb_pushed;
 
+	/* Push load balancing */
+	unsigned long last_nr_push_update;
+	int nr_push_attempt;
+
 	/* SD_BALANCE_EXEC stats */
 	unsigned int sbe_count;
 	unsigned int sbe_balanced;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 34aeb8e58e0b..46d33ab63336 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12356,6 +12356,16 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 		rq->max_idle_balance_cost =
 			max((u64)sysctl_sched_migration_cost, max_cost);
 	}
+	if (sched_feat(SIS_UTIL)) {
+		sd = rcu_dereference(per_cpu(sd_llc, cpu));
+
+		if (sd && sd->shared &&
+		    time_after_eq(jiffies, sd->last_nr_push_update + sd->min_interval)) {
+			sd->nr_push_attempt = READ_ONCE(sd->shared->nr_idle_scan);
+			sd->last_nr_push_update = jiffies;
+		}
+	}
+
 	rcu_read_unlock();
 
 	/*
@@ -13110,8 +13120,6 @@ static inline bool should_push_tasks(struct rq *rq)
 	struct sched_domain *sd;
 	int cpu = cpu_of(rq);
 
-	/* TODO: Add a CPU local failure counter. */
-
 	/* CPU doesn't have any fair task to push. */
 	if (!has_pushable_tasks(rq))
 		return false;
@@ -13126,6 +13134,10 @@ static inline bool should_push_tasks(struct rq *rq)
 	if (!sd)
 		return false;
 
+	/* We've failed to push tasks too many times. */
+	if (sched_feat(SIS_UTIL) && sd->nr_push_attempt <= 0)
+		return false;
+
 	/*
 	 * We may not be able to find a push target.
 	 * Skip for this tick and depend on the periodic
@@ -13176,6 +13188,13 @@ static bool push_fair_task(struct rq *rq)
 		return true;
 	}
 
+	/*
+	 * If the push failed after a full search, decrement the
+	 * attempt counter to discourage further attempts. The periodic
+	 * balancer will reset "nr_push_attempt" after a while.
+	 */
+	sd->nr_push_attempt--;
+
 	return false;
 }
--
2.43.0