Message-Id: <20250627-rneri-fix-cas-clusters-v1-1-121ffb50bbc7@linux.intel.com>
Date: Fri, 27 Jun 2025 14:45:27 -0700
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Tim C Chen <tim.c.chen@...ux.intel.com>, Barry Song <baohua@...nel.org>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, Len Brown <lenb@...nel.org>,
ricardo.neri@...el.com, linux-kernel@...r.kernel.org,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Subject: [PATCH 1/4] sched/fair: Always skip fully_busy higher-capacity
groups for load balance

update_sd_pick_busiest() is supposed to avoid selecting as busiest a
candidate scheduling group that has no more than one task per CPU when its
per-CPU capacity is greater than that of the destination CPU.

However, update_sd_pick_busiest() selects a candidate group as busiest
whenever its type is greater than that of the current busiest group (which
is initialized as group_has_spare), and this type comparison is reached
before the capacity check. As a result, a fully_busy group with higher
per-CPU capacity can still be selected as busiest.

Relocate the existing comparison of capacities so that it occurs before
the comparison of the types of the candidate and busiest groups.

Remove unnecessary parentheses while here.
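
To illustrate the effect of the reordering, here is a minimal,
self-contained sketch. It is not the scheduler code: struct group,
pick_busiest(), capacity_greater() and their fields are simplified
stand-ins introduced only for this example.

/*
 * Minimal standalone sketch, not kernel code: group_type, struct group,
 * pick_busiest() and capacity_greater() are simplified stand-ins for the
 * scheduler's internals, used only to illustrate the check ordering.
 */
#include <stdbool.h>
#include <stdio.h>

enum group_type { group_has_spare, group_fully_busy, group_overloaded };

struct group {
	enum group_type type;
	unsigned long min_capacity;	/* smallest per-CPU capacity in the group */
};

static bool capacity_greater(unsigned long a, unsigned long b)
{
	return a > b;	/* the kernel version adds a margin; simplified here */
}

/* Should @candidate replace @busiest when balancing toward a CPU of @dst_capacity? */
static bool pick_busiest(const struct group *candidate, const struct group *busiest,
			 unsigned long dst_capacity, bool asym_cpucapacity)
{
	/*
	 * New ordering: skip groups with at most one task per CPU and higher
	 * per-CPU capacity *before* the group-type comparison below, so that
	 * a fully_busy higher-capacity group can no longer win on type alone.
	 */
	if (asym_cpucapacity && candidate->type <= group_fully_busy &&
	    capacity_greater(candidate->min_capacity, dst_capacity))
		return false;

	if (candidate->type > busiest->type)
		return true;

	return false;	/* further tie-breaking elided */
}

int main(void)
{
	struct group busiest = { .type = group_has_spare, .min_capacity = 512 };
	struct group fully_busy_big = { .type = group_fully_busy, .min_capacity = 1024 };

	/* Destination CPU capacity 512 < 1024: the higher-capacity group is skipped. */
	printf("picked: %d\n", pick_busiest(&fully_busy_big, &busiest, 512, true));
	return 0;
}

With the capacity check placed after the type comparison, as in the
current code, the same call would return true for the fully_busy group
despite its higher per-CPU capacity.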
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
---
 kernel/sched/fair.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7e2963efe800..9da5014f8387 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10372,6 +10372,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	     sds->local_stat.group_type != group_has_spare))
 		return false;
 
+	/*
+	 * Candidate sg has no more than one task per CPU and has higher
+	 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
+	 * throughput. Maximize throughput, power/energy consequences are not
+	 * considered.
+	 */
+	if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
+	    sgs->group_type <= group_fully_busy &&
+	    capacity_greater(sg->sgc->min_capacity, capacity_of(env->dst_cpu)))
+		return false;
+
 	if (sgs->group_type > busiest->group_type)
 		return true;
 
@@ -10474,17 +10485,6 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		break;
 	}
 
-	/*
-	 * Candidate sg has no more than one task per CPU and has higher
-	 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
-	 * throughput. Maximize throughput, power/energy consequences are not
-	 * considered.
-	 */
-	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
-	    (sgs->group_type <= group_fully_busy) &&
-	    (capacity_greater(sg->sgc->min_capacity, capacity_of(env->dst_cpu))))
-		return false;
-
 	return true;
 }
 
--
2.43.0