Message-Id: <20220123183925.1052919-48-yury.norov@gmail.com>
Date: Sun, 23 Jan 2022 10:39:18 -0800
From: Yury Norov <yury.norov@...il.com>
To: Yury Norov <yury.norov@...il.com>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Andrew Morton <akpm@...ux-foundation.org>,
Michał Mirosław <mirq-linux@...e.qmqm.pl>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
David Laight <David.Laight@...lab.com>,
Joe Perches <joe@...ches.com>, Dennis Zhou <dennis@...nel.org>,
Emil Renner Berthing <kernel@...il.dk>,
Nicholas Piggin <npiggin@...il.com>,
Matti Vaittinen <matti.vaittinen@...rohmeurope.com>,
Alexey Klimov <aklimov@...hat.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: [PATCH 47/54] sched: replace cpumask_weight with cpumask_weight_eq where appropriate
kernel/sched code uses cpumask_weight() to compare the weight of a
cpumask with a given number. We can do this more efficiently with
cpumask_weight_eq(), which may stop traversing the cpumask early, as
soon as the result of the comparison is known.
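For illustration, here is a minimal user-space sketch of the early-exit
idea (the helper name bitmap_weight_eq_sketch and the use of
__builtin_popcountl are assumptions for the sketch, not the kernel's
implementation):

	#include <stdbool.h>
	#include <stddef.h>

	#define BITS_PER_LONG (8 * sizeof(unsigned long))

	/*
	 * Sketch only, not kernel code: compare the number of set bits
	 * in a bitmap against 'num', bailing out as soon as the running
	 * count exceeds 'num' instead of always popcounting the whole
	 * bitmap the way a plain weight computation does.
	 */
	static bool bitmap_weight_eq_sketch(const unsigned long *bits,
					    size_t nbits, size_t num)
	{
		size_t w = 0, i;

		for (i = 0; i < nbits / BITS_PER_LONG; i++) {
			w += __builtin_popcountl(bits[i]);
			if (w > num)
				return false;	/* early exit */
		}
		/* handle a trailing partial word, if any */
		if (nbits % BITS_PER_LONG)
			w += __builtin_popcountl(bits[i] &
					((1UL << (nbits % BITS_PER_LONG)) - 1));
		return w == num;
	}

For a check such as cpumask_weight_eq(smt_mask, 1) below, a helper of
this shape can return false as soon as a second set bit is seen,
whereas cpumask_weight(smt_mask) == 1 always computes the full weight.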
Signed-off-by: Yury Norov <yury.norov@...il.com>
---
kernel/sched/core.c | 8 ++++----
kernel/sched/topology.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 918d0bdc2ea8..7494f51a3608 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6006,7 +6006,7 @@ static void sched_core_cpu_starting(unsigned int cpu)
WARN_ON_ONCE(rq->core != rq);
/* if we're the first, we'll be our own leader */
- if (cpumask_weight(smt_mask) == 1)
+ if (cpumask_weight_eq(smt_mask, 1))
goto unlock;
/* find the leader */
@@ -6047,7 +6047,7 @@ static void sched_core_cpu_deactivate(unsigned int cpu)
sched_core_lock(cpu, &flags);
/* if we're the last man standing, nothing to do */
- if (cpumask_weight(smt_mask) == 1) {
+ if (cpumask_weight_eq(smt_mask, 1)) {
WARN_ON_ONCE(rq->core != rq);
goto unlock;
}
@@ -9045,7 +9045,7 @@ int sched_cpu_activate(unsigned int cpu)
/*
* When going up, increment the number of cores with SMT present.
*/
- if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+ if (cpumask_weight_eq(cpu_smt_mask(cpu), 2))
static_branch_inc_cpuslocked(&sched_smt_present);
#endif
set_cpu_active(cpu, true);
@@ -9120,7 +9120,7 @@ int sched_cpu_deactivate(unsigned int cpu)
/*
* When going down, decrement the number of cores with SMT present.
*/
- if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+ if (cpumask_weight_eq(cpu_smt_mask(cpu), 2))
static_branch_dec_cpuslocked(&sched_smt_present);
sched_core_cpu_deactivate(cpu);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8478e2a8cd65..79395571599f 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -169,7 +169,7 @@ static const unsigned int SD_DEGENERATE_GROUPS_MASK =
static int sd_degenerate(struct sched_domain *sd)
{
- if (cpumask_weight(sched_domain_span(sd)) == 1)
+ if (cpumask_weight_eq(sched_domain_span(sd), 1))
return 1;
/* Following flags need at least 2 groups */
--
2.30.2