Message-ID: <20210304222944.32504-1-song.bao.hua@hisilicon.com>
Date: Fri, 5 Mar 2021 11:29:44 +1300
From: Barry Song <song.bao.hua@...ilicon.com>
To: <valentin.schneider@....com>, <vincent.guittot@...aro.org>,
<mingo@...hat.com>, <peterz@...radead.org>,
<juri.lelli@...hat.com>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>
CC: <linux-kernel@...r.kernel.org>, <linuxarm@...neuler.org>,
Barry Song <song.bao.hua@...ilicon.com>
Subject: [PATCH] sched/topology: remove redundant cpumask_and in init_overlap_sched_group
mask is built in build_balance_mask() by iterating for_each_cpu(i, sg_span),
so it must be a subset of sched_group_span(sg). While cpumask_first_and()
doesn't yield a wrong balance CPU, the extra AND with sched_group_span(sg)
is redundant; cpumask_first(mask) is enough.
Signed-off-by: Barry Song <song.bao.hua@...ilicon.com>
---
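Not part of the change itself: a minimal userspace sketch (uint64_t bitmaps
standing in for struct cpumask, and an arbitrary filter standing in for the
sibling checks in build_balance_mask()) of why mask ends up a subset of
sg_span, making the additional AND a no-op.

/*
 * Userspace model only, not kernel code: "mask" only ever receives bits
 * that were iterated from "sg_span", so mask is a subset of sg_span and
 * (sg_span & mask) == mask.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* sg_span plays the role of sched_group_span(sg). */
static uint64_t build_balance_mask_model(uint64_t sg_span)
{
	uint64_t mask = 0;
	int i;

	for (i = 0; i < 64; i++) {
		if (!(sg_span & (1ULL << i)))
			continue;	/* for_each_cpu(i, sg_span) */
		if (i % 2)
			continue;	/* stand-in for the sibling checks */
		mask |= 1ULL << i;	/* cpumask_set_cpu(i, mask) */
	}
	return mask;
}

int main(void)
{
	uint64_t sg_span = 0xf0f0;
	uint64_t mask = build_balance_mask_model(sg_span);

	/* The AND changes nothing, so the first set bit is the same. */
	assert((sg_span & mask) == mask);
	printf("mask=%#llx, sg_span&mask=%#llx\n",
	       (unsigned long long)mask,
	       (unsigned long long)(sg_span & mask));
	return 0;
}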
kernel/sched/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 12f8058..45f3db2 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -934,7 +934,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
 	int cpu;
 
 	build_balance_mask(sd, sg, mask);
-	cpu = cpumask_first_and(sched_group_span(sg), mask);
+	cpu = cpumask_first(mask);
 
 	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
 	if (atomic_inc_return(&sg->sgc->ref) == 1)
--
1.8.3.1