Message-ID: <20240902183609.1683756-4-yury.norov@gmail.com>
Date: Mon, 2 Sep 2024 11:36:07 -0700
From: Yury Norov <yury.norov@...il.com>
To: linux-kernel@...r.kernel.org,
Christophe JAILLET <christophe.jaillet@...adoo.fr>
Cc: Yury Norov <yury.norov@...il.com>,
Chen Yu <yu.c.chen@...el.com>,
Leonardo Bras <leobras@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Subject: [PATCH v3 3/3] sched/topology: reorganize topology_span_sane() checking order
The function currently makes 3 checks:
1. mc == mi;
2. cpumask_equal(mc, mi);
3. cpumask_intersects(mc, mi).
Historically, the last two checks have formed a single if() condition.
Logically, #1 and #2 should be tested together because, for topology
sanity checking purposes, they do the same thing. In contrast, #3 tests
for intersection, which is a different logical unit.
This patch creates a helper for #1 and #2 and moves the corresponding
comment on top of the helper, decluttering the main topology_span_sane().
Signed-off-by: Yury Norov <yury.norov@...il.com>
---
kernel/sched/topology.c | 31 ++++++++++++++++++-------------
1 file changed, 18 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 04a3b3d7b6f4..bbbe7955d37c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2346,6 +2346,22 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
return sd;
}
+/*
+ * Some topology levels (e.g. PKG in default_topology[]) have a
+ * sched_domain_mask_f implementation that reuses the same mask for
+ * several CPUs (in PKG's case, one mask for all CPUs in the same
+ * NUMA node).
+ *
+ * For such topology levels, repeating cpumask_equal() checks is
+ * wasteful. Instead, we first check that the tl->mask(i) pointers
+ * aren't the same.
+ */
+static inline bool topology_cpumask_equal(const struct cpumask *m1,
+ const struct cpumask *m2)
+{
+ return m1 == m2 || cpumask_equal(m1, m2);
+}
+
/*
* Ensure topology masks are sane, i.e. there are no conflicts (overlaps) for
* any two given CPUs at this (non-NUMA) topology level.
@@ -2369,18 +2385,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
*/
for_each_cpu_from(cpu, cpu_map) {
mi = tl->mask(cpu);
-
- /*
- * Some topology levels (e.g. PKG in default_topology[])
- * have a sched_domain_mask_f implementation that reuses
- * the same mask for several CPUs (in PKG's case, one mask
- * for all CPUs in the same NUMA node).
- *
- * For such topology levels, repeating cpumask_equal()
- * checks is wasteful. Instead, we first check that the
- * tl->mask(i) pointers aren't the same.
- */
- if (mi == mc)
+ if (topology_cpumask_equal(mc, mi))
continue;
/*
@@ -2389,7 +2394,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
* remove CPUs, which only lessens our ability to detect
* overlaps
*/
- if (!cpumask_equal(mc, mi) && cpumask_intersects(mc, mi))
+ if (cpumask_intersects(mc, mi))
return false;
}
--
2.43.0