Message-Id: <5ffc7b9ed03c6301ac2f710f609282959491b526.1608010334.git.viresh.kumar@linaro.org>
Date: Tue, 15 Dec 2020 11:04:14 +0530
From: Viresh Kumar <viresh.kumar@...aro.org>
To: Ionela Voinescu <ionela.voinescu@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>
Cc: Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [PATCH V3 1/3] arm64: topology: Avoid the have_policy check
Every time I have stumbled upon this routine, I get confused by the way
'have_policy' is used and I have to dig in to understand why it is so.
Here is an attempt to make it easier to understand, and hopefully it is
an improvement.
The 'have_policy' check was just an optimization to avoid writing to
amu_fie_cpus when we don't have to, but that optimization creates more
confusion than it is worth. Let's just write to amu_fie_cpus whenever
all the CPUs support AMUs. It is much cleaner that way.
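To make the end result concrete, here is a rough sketch of the relevant
part of init_amu_fie() after this patch (condensed from the diff below;
freq_counters_valid() stands in for the existing per-CPU checks, which
are unchanged by this patch):

	for_each_present_cpu(cpu) {
		if (!freq_counters_valid(cpu))	/* other checks elided */
			continue;

		cpumask_set_cpu(cpu, valid_cpus);
		enable_policy_freq_counters(cpu, valid_cpus);
	}

	/* Overwrite amu_fie_cpus if all CPUs support AMU */
	if (cpumask_equal(valid_cpus, cpu_present_mask))
		cpumask_copy(amu_fie_cpus, cpu_present_mask);
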
Reviewed-by: Ionela Voinescu <ionela.voinescu@....com>
Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
---
V3:
- Added Reviewed-by tag.
arch/arm64/kernel/topology.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa697e83e..ebadc73449f9 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -199,14 +199,14 @@ static int freq_inv_set_max_ratio(int cpu, u64 max_rate, u64 ref_rate)
return 0;
}
-static inline bool
+static inline void
enable_policy_freq_counters(int cpu, cpumask_var_t valid_cpus)
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
if (!policy) {
pr_debug("CPU%d: No cpufreq policy found.\n", cpu);
- return false;
+ return;
}
if (cpumask_subset(policy->related_cpus, valid_cpus))
@@ -214,8 +214,6 @@ enable_policy_freq_counters(int cpu, cpumask_var_t valid_cpus)
amu_fie_cpus);
cpufreq_cpu_put(policy);
-
- return true;
}
static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
@@ -225,7 +223,6 @@ static int __init init_amu_fie(void)
{
bool invariance_status = topology_scale_freq_invariant();
cpumask_var_t valid_cpus;
- bool have_policy = false;
int ret = 0;
int cpu;
@@ -245,17 +242,12 @@ static int __init init_amu_fie(void)
continue;
cpumask_set_cpu(cpu, valid_cpus);
- have_policy |= enable_policy_freq_counters(cpu, valid_cpus);
+ enable_policy_freq_counters(cpu, valid_cpus);
}
- /*
- * If we are not restricted by cpufreq policies, we only enable
- * the use of the AMU feature for FIE if all CPUs support AMU.
- * Otherwise, enable_policy_freq_counters has already enabled
- * policy cpus.
- */
- if (!have_policy && cpumask_equal(valid_cpus, cpu_present_mask))
- cpumask_or(amu_fie_cpus, amu_fie_cpus, valid_cpus);
+ /* Overwrite amu_fie_cpus if all CPUs support AMU */
+ if (cpumask_equal(valid_cpus, cpu_present_mask))
+ cpumask_copy(amu_fie_cpus, cpu_present_mask);
if (!cpumask_empty(amu_fie_cpus)) {
pr_info("CPUs[%*pbl]: counters will be used for FIE.",
--
2.25.0.rc1.19.g042ed3e048af