Message-Id: <20210219130003.2890-7-valentin.schneider@arm.com>
Date: Fri, 19 Feb 2021 13:00:02 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: linux-kernel@...r.kernel.org
Cc: Qais Yousef <qais.yousef@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Quentin Perret <qperret@...gle.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Rik van Riel <riel@...riel.com>,
Lingutla Chandrasekhar <clingutla@...eaurora.org>
Subject: [PATCH v2 6/7] sched/fair: Filter out locally-unsolvable misfit imbalances
Consider the following (hypothetical) asymmetric CPU capacity topology,
with some amount of capacity pressure (RT | DL | IRQ | thermal):
  DIE [          ]
  MC  [    ][    ]
       0  1  2  3
| CPU | capacity_orig | capacity |
|-----+---------------+----------|
| 0 | 870 | 860 |
| 1 | 870 | 600 |
| 2 | 1024 | 850 |
| 3 | 1024 | 860 |
If CPU1 has a misfit task, then CPU0, CPU2 and CPU3 are valid candidates to
grant the task an uplift in CPU capacity. Suppose CPU0 and CPU3 are
sufficiently busy, i.e. they don't have enough spare capacity to
accommodate CPU1's misfit task. It would then fall to CPU2 to pull the
task.
This currently won't happen, because CPU2 will fail

  capacity_greater(capacity_of(CPU2), sg->sgc->max_capacity)

in update_sd_pick_busiest(), where 'sg' is the [0, 1] group at DIE
level. In this case, max_capacity is that of CPU0, which at this point in
time is greater than that of CPU2 (860 vs 850). This comparison doesn't
make much sense, given that the only CPUs we should care about in this
scenario are CPU1 (the CPU with the misfit task) and CPU2 (the
load-balance destination CPU).
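As a worked example of the above (hypothetical numbers from the table, and
assuming capacity_greater()'s ~5% margin, i.e. cap1 * 1024 > cap2 * 1078):

  /*
   * capacity_of(CPU2) = 850, sg->sgc->max_capacity = 860 (CPU0's):
   *   850 * 1024 = 870400 < 860 * 1078 = 927080 => false
   * update_sd_pick_busiest() thus refuses to pick the [0, 1] group as
   * busiest, despite CPU2 having spare capacity for CPU1's misfit task.
   */
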
Aggregate a misfit task's load into sgs->group_misfit_task_load only if
env->dst_cpu would grant it a capacity uplift. Separately track whether a
sched_group contains a misfit task so that it is still classified as
group_misfit_task and not picked as the busiest group when pulling from a
lower-capacity CPU (which is the current behaviour and prevents
down-migration).
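Revisiting the example above under the new scheme, with env->dst_cpu ==
CPU2 pulling from the [0, 1] group (a walk-through using the same
hypothetical numbers and capacity_greater()'s ~5% margin):

  /*
   * cpu_capacity_greater(CPU2, CPU1) == capacity_greater(850, 600):
   *   850 * 1024 = 870400 > 600 * 1078 = 646800 => true
   * CPU1's misfit load is aggregated into sgs->group_misfit_task_load,
   * so the [0, 1] group remains a valid busiest-group candidate and
   * CPU2 can pull the task.
   */
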
Since find_busiest_queue() can now iterate over CPUs with a higher capacity
than the local CPU's, add a capacity check there.
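To illustrate why (same hypothetical numbers, now with env->dst_cpu ==
CPU3, and supposing both CPU0 and CPU1 carry misfit tasks):

  /*
   * cpu_capacity_greater(CPU3, CPU0) == capacity_greater(860, 860):
   *   860 * 1024 = 880640 < 860 * 1078 = 927080 => false, skip CPU0
   * cpu_capacity_greater(CPU3, CPU1) == capacity_greater(860, 600):
   *   880640 > 600 * 1078 = 646800 => true, CPU1 stays a candidate
   * find_busiest_queue() thus never returns a misfit rq whose task
   * env->dst_cpu couldn't uplift.
   */
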
Reviewed-by: Qais Yousef <qais.yousef@....com>
Signed-off-by: Valentin Schneider <valentin.schneider@....com>
---
kernel/sched/fair.c | 39 ++++++++++++++++++++++++++++++---------
1 file changed, 30 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index af5ce083c982..ee172b384e29 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5747,6 +5747,12 @@ static unsigned long capacity_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity;
 }
 
+/* Is CPU a's capacity noticeably greater than CPU b's? */
+static inline bool cpu_capacity_greater(int a, int b)
+{
+	return capacity_greater(capacity_of(a), capacity_of(b));
+}
+
 static void record_wakee(struct task_struct *p)
 {
 	/*
@@ -8061,7 +8067,8 @@ struct sg_lb_stats {
 	unsigned int group_weight;
 	enum group_type group_type;
 	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
-	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
+	unsigned long group_misfit_task_load; /* Task load that can be uplifted */
+	int group_has_misfit_task; /* A CPU has a task too big for its capacity */
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -8334,7 +8341,7 @@ group_type group_classify(unsigned int imbalance_pct,
 	if (sgs->group_asym_packing)
 		return group_asym_packing;
 
-	if (sgs->group_misfit_task_load)
+	if (sgs->group_has_misfit_task)
 		return group_misfit_task;
 
 	if (!group_has_capacity(imbalance_pct, sgs))
@@ -8420,10 +8427,21 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			continue;
 
 		/* Check for a misfit task on the cpu */
-		if (sd_has_asym_cpucapacity(env->sd) &&
-		    sgs->group_misfit_task_load < rq->misfit_task_load) {
-			sgs->group_misfit_task_load = rq->misfit_task_load;
-			*sg_status |= SG_OVERLOAD;
+		if (!sd_has_asym_cpucapacity(env->sd) ||
+		    !rq->misfit_task_load)
+			continue;
+
+		*sg_status |= SG_OVERLOAD;
+		sgs->group_has_misfit_task = true;
+
+		/*
+		 * Don't attempt to maximize load for misfit tasks that can't be
+		 * granted a CPU capacity uplift.
+		 */
+		if (cpu_capacity_greater(env->dst_cpu, i)) {
+			sgs->group_misfit_task_load = max(
+				sgs->group_misfit_task_load,
+				rq->misfit_task_load);
 		}
 	}
@@ -8474,7 +8492,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	/* Don't try to pull misfit tasks we can't help */
 	if (static_branch_unlikely(&sched_asym_cpucapacity) &&
 	    sgs->group_type == group_misfit_task &&
-	    (!capacity_greater(capacity_of(env->dst_cpu), sg->sgc->max_capacity) ||
+	    (!sgs->group_misfit_task_load ||
 	     sds->local_stat.group_type != group_has_spare))
 		return false;
 
@@ -9434,15 +9452,18 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		case migrate_misfit:
 			/*
 			 * For ASYM_CPUCAPACITY domains with misfit tasks we
-			 * simply seek the "biggest" misfit task.
+			 * simply seek the "biggest" misfit task we can
+			 * accommodate.
 			 */
+			if (!cpu_capacity_greater(env->dst_cpu, i))
+				continue;
+
 			if (rq->misfit_task_load > busiest_load) {
 				busiest_load = rq->misfit_task_load;
 				busiest = rq;
 			}
 			break;
-
 		}
 	}
--
2.27.0