Message-ID: <20250316102916.10614-2-kprateek.nayak@amd.com>
Date: Sun, 16 Mar 2025 10:29:11 +0000
From: K Prateek Nayak <kprateek.nayak@....com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Chen Yu <yu.c.chen@...el.com>,
<linux-kernel@...r.kernel.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman
<mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, David Vernet
<void@...ifault.com>, "Gautham R. Shenoy" <gautham.shenoy@....com>, "Swapnil
Sapkal" <swapnil.sapkal@....com>, Shrikanth Hegde <sshegde@...ux.ibm.com>, "K
Prateek Nayak" <kprateek.nayak@....com>
Subject: [RFC PATCH 10/08] sched/fair: Compute nr_{numa,preferred}_running for non-NUMA domains
Migrations at non-NUMA domains remain within a single NUMA node and therefore
do not change the aggregate nr_{numa,preferred}_running stats. Compute them
for non-NUMA groups as well so they can be propagated and reused at the first
NUMA domain when one exists.

While at it, also clear sd_stats before aggregation.

Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
---
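For context, here is a stand-alone sketch of the idea (not kernel code:
group_stats, aggregate_stats() and classify() below are simplified stand-ins
for sg_lb_stats, aggregate_sd_stats() and fbq_classify_group() in
kernel/sched/fair.c). The NUMA counters are summed unconditionally at every
domain level, the aggregate is zeroed before summing, and the resulting sums
feed the NUMA-level group classification:

/*
 * Stand-alone sketch; stand-in names, not the kernel implementation.
 */
#include <stdio.h>
#include <string.h>

struct group_stats {
	unsigned int sum_nr_running;
	unsigned int nr_numa_running;		/* tasks with a preferred node */
	unsigned int nr_preferred_running;	/* tasks on their preferred node */
};

/* Sum the child-group counters unconditionally, as the patch now does. */
static void aggregate_stats(struct group_stats *parent,
			    const struct group_stats *child)
{
	parent->sum_nr_running       += child->sum_nr_running;
	parent->nr_numa_running      += child->nr_numa_running;
	parent->nr_preferred_running += child->nr_preferred_running;
}

/* Same ordering fbq_classify_group() relies on: regular < remote < all. */
static const char *classify(const struct group_stats *gs)
{
	if (gs->sum_nr_running > gs->nr_numa_running)
		return "regular";	/* some tasks have no NUMA preference */
	if (gs->sum_nr_running > gs->nr_preferred_running)
		return "remote";	/* some tasks are off their preferred node */
	return "all";			/* everything runs where it prefers to */
}

int main(void)
{
	struct group_stats child[2] = {
		{ .sum_nr_running = 4, .nr_numa_running = 4, .nr_preferred_running = 3 },
		{ .sum_nr_running = 2, .nr_numa_running = 2, .nr_preferred_running = 2 },
	};
	struct group_stats parent;
	int i;

	/* Clear the aggregate before summing, like the added memset(). */
	memset(&parent, 0, sizeof(parent));
	for (i = 0; i < 2; i++)
		aggregate_stats(&parent, &child[i]);

	printf("aggregated group is '%s'\n", classify(&parent));	/* "remote" */
	return 0;
}

Only the two NUMA counters are what the patch starts computing below the NUMA
level; clearing sd_stats before the summation is what the added memset() does.
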
kernel/sched/fair.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 212bee3e9f35..d09f900a3107 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10398,10 +10398,8 @@ static inline void aggregate_sd_stats(struct lb_env *env,
 	sd_stats->overutilized |= sg_stats->overutilized;
 
 #ifdef CONFIG_NUMA_BALANCING
-	if (env->sd->flags & SD_NUMA) {
-		sd_stats->nr_numa_running += sg_stats->nr_numa_running;
-		sd_stats->nr_preferred_running += sg_stats->nr_preferred_running;
-	}
+	sd_stats->nr_numa_running += sg_stats->nr_numa_running;
+	sd_stats->nr_preferred_running += sg_stats->nr_preferred_running;
 #endif
 }
 
@@ -10464,11 +10462,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			sgs->overloaded = 1;
 
 #ifdef CONFIG_NUMA_BALANCING
-		/* Only fbq_classify_group() uses this to classify NUMA groups */
-		if (sd_flags & SD_NUMA) {
-			sgs->nr_numa_running += rq->nr_numa_running;
-			sgs->nr_preferred_running += rq->nr_preferred_running;
-		}
+		sgs->nr_numa_running += rq->nr_numa_running;
+		sgs->nr_preferred_running += rq->nr_preferred_running;
 #endif
 		if (local_group)
 			continue;
@@ -11112,8 +11107,10 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 	 * load balancing there, aggregate the statistics at current domain
 	 * to be retrieved when load balancing at parent.
 	 */
-	if (env->sd->parent && can_retrieve_stats(env->sd->parent, env->idle))
+	if (env->sd->parent && can_retrieve_stats(env->sd->parent, env->idle)) {
+		memset(&sd_stats, 0, sizeof(sd_stats));
 		should_prop = true;
+	}
 
 	do {
 		struct sg_lb_stats *sgs = &tmp_sgs;
--
2.43.0