Message-Id: <20230207051105.11575-9-ricardo.neri-calderon@linux.intel.com>
Date: Mon, 6 Feb 2023 21:10:49 -0800
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ricardo Neri <ricardo.neri@...el.com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Valentin Schneider <vschneid@...hat.com>,
Lukasz Luba <lukasz.luba@....com>,
Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org,
"Joel Fernandes (Google)" <joel@...lfernandes.org>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
"Tim C . Chen" <tim.c.chen@...el.com>
Subject: [PATCH v3 08/24] sched/fair: Use IPCC stats to break ties between asym_packing sched groups
As it iterates, update_sd_pick_busiest() may keep selecting as busiest
sched groups of identical asym_packing priority. Since both groups have
the same priority, either group is an equally good choice. The IPCC
statistics provide a measure of the throughput before and after load
balance. Use them to break the tie and pick a busiest scheduling group
from otherwise identical asym_packing scheduling groups.
Pick as busiest the scheduling group that yields a higher IPCC score
after load balancing.
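For illustration only (not part of the patch), below is a minimal,
stand-alone sketch of the tie-break order described above. The struct
and function names are simplified stand-ins for the sg_lb_stats fields
and helpers added by this series:

	/*
	 * User-space sketch of the IPCC tie break; simplified stand-in
	 * for struct sg_lb_stats, not kernel code.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct ipcc_stats {
		unsigned long score_before;	/* throughput before load balance */
		unsigned long score_after;	/* throughput after load balance */
	};

	/*
	 * Prefer @a when it yields more throughput after load balance.
	 * On a tie, prefer the group whose current throughput is lower.
	 */
	static bool ipcc_prefer(const struct ipcc_stats *a,
				const struct ipcc_stats *b)
	{
		if (a->score_after > b->score_after)
			return true;
		if (a->score_after == b->score_after)
			return a->score_before < b->score_before;
		return false;
	}

	int main(void)
	{
		struct ipcc_stats a = { .score_before = 10, .score_after = 25 };
		struct ipcc_stats b = { .score_before = 15, .score_after = 25 };

		/* Equal score_after, so the lower score_before wins. */
		printf("prefer a: %d\n", ipcc_prefer(&a, &b));	/* prints 1 */
		return 0;
	}

That is, when the projected throughput after load balance is the same,
the group that currently delivers less throughput is selected as busiest.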
Cc: Ben Segall <bsegall@...gle.com>
Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Ionela Voinescu <ionela.voinescu@....com>
Cc: Joel Fernandes (Google) <joel@...lfernandes.org>
Cc: Len Brown <len.brown@...el.com>
Cc: Lukasz Luba <lukasz.luba@....com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Tim C. Chen <tim.c.chen@...el.com>
Cc: Valentin Schneider <vschneid@...hat.com>
Cc: x86@...nel.org
Cc: linux-pm@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
---
Changes since v2:
* None
Changes since v1:
* Added a comment to clarify why sched_asym_prefer() needs a tie breaker
only in update_sd_pick_busiest(). (PeterZ)
* Renamed functions for accuracy:
sched_asym_class_prefer() >> sched_asym_ipcc_prefer()
sched_asym_class_pick() >> sched_asym_ipcc_pick()
* Reworded commit message for clarity.
---
kernel/sched/fair.c | 72 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b6165aa8a376..841927b9b192 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9345,6 +9345,60 @@ static void update_sg_lb_stats_scores(struct sg_lb_stats *sgs,
sgs->ipcc_score_before = before;
}
+/**
+ * sched_asym_ipcc_prefer - Select a sched group based on its IPCC score
+ * @a: Load balancing statistics of a sched group
+ * @b: Load balancing statistics of a second sched group
+ *
+ * Returns: true if @a has a higher IPCC score than @b after load balance.
+ * False otherwise.
+ */
+static bool sched_asym_ipcc_prefer(struct sg_lb_stats *a,
+ struct sg_lb_stats *b)
+{
+ if (!sched_ipcc_enabled())
+ return false;
+
+ /* @a increases overall throughput after load balance. */
+ if (a->ipcc_score_after > b->ipcc_score_after)
+ return true;
+
+ /*
+ * If @a and @b yield the same overall throughput, pick @a if
+ * its current throughput is lower than that of @b.
+ */
+ if (a->ipcc_score_after == b->ipcc_score_after)
+ return a->ipcc_score_before < b->ipcc_score_before;
+
+ return false;
+}
+
+/**
+ * sched_asym_ipcc_pick - Select a sched group based on its IPCC score
+ * @a: A scheduling group
+ * @b: A second scheduling group
+ * @a_stats: Load balancing statistics of @a
+ * @b_stats: Load balancing statistics of @b
+ *
+ * Returns: true if @a and @b have the same priority and @a has tasks with IPC
+ * classes that yield higher overall throughput after load balance. False otherwise.
+ */
+static bool sched_asym_ipcc_pick(struct sched_group *a,
+ struct sched_group *b,
+ struct sg_lb_stats *a_stats,
+ struct sg_lb_stats *b_stats)
+{
+ /*
+ * Only use the class-specific preference selection if both sched
+ * groups have the same priority.
+ */
+ if (arch_asym_cpu_priority(a->asym_prefer_cpu) !=
+ arch_asym_cpu_priority(b->asym_prefer_cpu))
+ return false;
+
+ return sched_asym_ipcc_prefer(a_stats, b_stats);
+}
+
#else /* CONFIG_IPC_CLASSES */
static void update_sg_lb_ipcc_stats(int dst_cpu, struct sg_lb_stats *sgs,
struct rq *rq)
@@ -9361,6 +9415,14 @@ static void update_sg_lb_stats_scores(struct sg_lb_stats *sgs,
{
}
+static bool sched_asym_ipcc_pick(struct sched_group *a,
+ struct sched_group *b,
+ struct sg_lb_stats *a_stats,
+ struct sg_lb_stats *b_stats)
+{
+ return false;
+}
+
#endif /* CONFIG_IPC_CLASSES */
/**
@@ -9596,6 +9658,16 @@ static bool update_sd_pick_busiest(struct lb_env *env,
/* Prefer to move from lowest priority CPU's work */
if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
return false;
+
+ /*
+ * Unlike other callers of sched_asym_prefer(), here both @sg
+ * and @sds::busiest have tasks running. When they have equal
+ * priority, their IPC class scores can be used to select a
+ * better busiest.
+ */
+ if (sched_asym_ipcc_pick(sds->busiest, sg, &sds->busiest_stat, sgs))
+ return false;
+
break;
case group_misfit_task:
--
2.25.1