Message-Id: <20200817113003.20802-11-valentin.schneider@arm.com>
Date: Mon, 17 Aug 2020 12:29:56 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Cc: Quentin Perret <qperret@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...nel.org,
peterz@...radead.org, vincent.guittot@...aro.org,
morten.rasmussen@....com
Subject: [PATCH v6 10/17] sched/topology: Propagate SD_ASYM_CPUCAPACITY upwards
We currently set this flag *only* on domains whose topology level exactly
matches the level where we detect asymmetry (as returned by
asym_cpu_capacity_level()). This is rather problematic.
Say there are two clusters in the system, one with a lone big CPU and the
other with a mix of big and LITTLE CPUs (as is allowed by DynamIQ):
DIE [              ]
MC  [          ][  ]
     0  1  2  3  4
     L  L  B  B  B
asym_cpu_capacity_level() will figure out that the MC level is the one
where all CPUs can see a CPU of max capacity, and we will thus set
SD_ASYM_CPUCAPACITY at MC level for all CPUs.
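(As an aside, here is a toy userspace sketch of the property described
above: find the lowest topology level at which every CPU's span contains a
CPU of maximum capacity. The level names, groupings and capacity values
below are invented for the example; this is *not* the kernel's
asym_cpu_capacity_level() implementation.)

#include <stdio.h>

#define NR_CPUS 5

enum level { MC, DIE, NR_LEVELS };
static const char *level_name[] = { "MC", "DIE" };

/* Invented capacities: CPUs 0-1 are LITTLE, CPUs 2-4 are big */
static const int capacity[NR_CPUS] = { 446, 446, 1024, 1024, 1024 };
/* Group a CPU belongs to at each level; at DIE everyone shares one group */
static const int group[NR_LEVELS][NR_CPUS] = {
        [MC]  = { 0, 0, 0, 0, 1 },
        [DIE] = { 0, 0, 0, 0, 0 },
};

static int toy_asym_level(void)
{
        int lvl, cpu, peer, max = 0;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                if (capacity[cpu] > max)
                        max = capacity[cpu];

        for (lvl = 0; lvl < NR_LEVELS; lvl++) {
                int all_see_max = 1;

                for (cpu = 0; cpu < NR_CPUS && all_see_max; cpu++) {
                        int sees_max = 0;

                        for (peer = 0; peer < NR_CPUS; peer++)
                                if (group[lvl][peer] == group[lvl][cpu] &&
                                    capacity[peer] == max)
                                        sees_max = 1;
                        all_see_max = sees_max;
                }
                if (all_see_max)
                        return lvl;           /* lowest such level */
        }
        return -1;
}

int main(void)
{
        int lvl = toy_asym_level();

        printf("asymmetry fully visible from: %s\n",
               lvl >= 0 ? level_name[lvl] : "nowhere");
        return 0;
}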
That lone big CPU will degenerate its MC domain, since it would be alone in
there, and will end up with just a DIE domain. Since the flag was only set
at MC, this CPU ends up not seeing any SD with the flag set, which is
broken.
Rather than clearing dflags at every topology level, clear it once before
entering the topology level loop. This way, a flag set at a given level is
properly propagated up to the levels above it.
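(For illustration only, a minimal userspace sketch of what moving the
dflags reset out of the loop buys us: once the asymmetry level is reached,
every level walked afterwards keeps the flag, so in the example above CPU 4's
surviving DIE domain still gets SD_ASYM_CPUCAPACITY. The level names and the
asymmetry level are assumptions for the example; this is not the kernel's
build_sched_domains() code.)

#include <stdio.h>

#define SD_ASYM_CPUCAPACITY 0x1

enum level { MC, DIE, NR_LEVELS };
static const char *level_name[] = { "MC", "DIE" };

int main(void)
{
        int tl_asym = MC;       /* level where asymmetry is first detected */
        int dflags = 0;         /* cleared once, before walking the levels */
        int tl;

        for (tl = 0; tl < NR_LEVELS; tl++) {
                /*
                 * With the old per-iteration "dflags = 0", DIE would not
                 * see the flag, and a CPU whose MC domain degenerates
                 * would end up with no domain carrying the flag at all.
                 */
                if (tl == tl_asym)
                        dflags |= SD_ASYM_CPUCAPACITY;

                printf("%-3s: SD_ASYM_CPUCAPACITY=%d\n",
                       level_name[tl], !!(dflags & SD_ASYM_CPUCAPACITY));
        }
        return 0;
}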
Reviewed-by: Quentin Perret <qperret@...gle.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Valentin Schneider <valentin.schneider@....com>
---
include/linux/sched/sd_flags.h | 4 +++-
kernel/sched/topology.c | 3 +--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 21a43ad6f26a..4f07b405564e 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -83,9 +83,11 @@ SD_FLAG(SD_WAKE_AFFINE, SDF_SHARED_CHILD)
/*
* Domain members have different CPU capacities
*
+ * SHARED_PARENT: Set from the topmost domain down to the first domain where
+ * asymmetry is detected.
* NEEDS_GROUPS: Per-CPU capacity is asymmetric between groups.
*/
-SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_NEEDS_GROUPS)
+SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
/*
* Domain members share CPU capacity (i.e. SMT)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 00ad7cef2ec1..02fd8db747b2 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1988,11 +1988,10 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
/* Set up domains for CPUs specified by the cpu_map: */
for_each_cpu(i, cpu_map) {
struct sched_domain_topology_level *tl;
+ int dflags = 0;
sd = NULL;
for_each_sd_topology(tl) {
- int dflags = 0;
-
if (tl == tl_asym) {
dflags |= SD_ASYM_CPUCAPACITY;
has_asym = true;
--
2.27.0
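(A further note on the SDF_SHARED_PARENT annotation added to sd_flags.h
above: it encodes the invariant that a flag set on a domain is also set on
all of that domain's parents. The toy structures and the check below are
made up to illustrate that invariant; they are neither the kernel's
sched_domain code nor the debug checks introduced elsewhere in this series.)

#include <stdbool.h>
#include <stdio.h>

#define SD_ASYM_CPUCAPACITY 0x1

struct toy_sd {
        struct toy_sd *parent;
        unsigned int flags;
};

/*
 * A flag tagged SDF_SHARED_PARENT must be set on every ancestor of a
 * domain that has it: walking up, the flag may never disappear.
 */
static bool shared_parent_ok(const struct toy_sd *sd, unsigned int flag)
{
        for (; sd; sd = sd->parent)
                if ((sd->flags & flag) && sd->parent &&
                    !(sd->parent->flags & flag))
                        return false;   /* child has it, parent doesn't */
        return true;
}

int main(void)
{
        struct toy_sd die = { .parent = NULL, .flags = SD_ASYM_CPUCAPACITY };
        struct toy_sd mc  = { .parent = &die, .flags = SD_ASYM_CPUCAPACITY };

        printf("invariant holds: %d\n",
               shared_parent_ok(&mc, SD_ASYM_CPUCAPACITY));
        return 0;
}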