Message-ID: <20191127154029.GA4826@bogus>
Date: Wed, 27 Nov 2019 15:40:29 +0000
From: Sudeep Holla <sudeep.holla@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Viresh Kumar <viresh.kumar@...aro.org>,
"Rafael J . Wysocki" <rjw@...ysocki.net>,
Liviu Dudau <liviu.dudau@....com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Sudeep Holla <sudeep.holla@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Lukasz Luba <lukasz.luba@....com>, linux-pm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cpufreq: vexpress-spc: Fix wrong alternation of
policy->related_cpus during CPU hp
On Wed, Nov 27, 2019 at 03:58:49PM +0100, Dietmar Eggemann wrote:
> On 27/11/2019 14:32, Sudeep Holla wrote:
[...]
> >
> > Yes, but there's no architectural way. I need to revisit tc2_pm.c to
> > check whether we can do any magic there.
>
> I'm fine with finding a better solution that returns a fixed topology core
> cpumask, or with calling this patch a workaround. AFAICS, only TC2 is affected.
>
> ("arm: Fix topology setup in case of CPU hotplug for CONFIG_SCHED_MC")
> is needed for other systems as well in case we have commit ca74b316df96
> ("arm: Use common cpu_topology structure and functions."). We probably
> don't want to revert commit ca74b316df96?
>
Correct
> We do CPU hp stress tests in our EAS mainline integration test suite
> https://developer.arm.com/tools-and-software/open-source-software/linux-kernel/energy-aware-scheduling/eas-mainline-development
> and that is where we initially encountered this issue on TC2.
I could come up with the patch below; if it is any cleaner and acceptable,
I am happy to post it. One advantage of moving the use of
topology_core_cpumask() inside ve_spc_clk_init() is that the latter is just
a device_initcall and not a module, which allows ve_spc_cpufreq to be
handled as a module. I prefer this to the previous solution/workaround.
Let me know.
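
To see the idea in isolation: at init time we record, once per cluster,
that all CPUs of that cluster share a single OPP table; the cpufreq driver
then recovers that mask via dev_pm_opp_get_sharing_cpus() instead of
reading the (hotplug-sensitive) topology masks at policy-init time. A
minimal, untested sketch of the pattern — the helper name
mark_cluster_opps_shared is made up for illustration, and MAX_CLUSTERS is
assumed to be defined as in the TC2 code:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/pm_opp.h>
#include <linux/printk.h>
#include <linux/topology.h>

static void __init mark_cluster_opps_shared(void)
{
	bool done[MAX_CLUSTERS] = { false };
	int cpu, cluster;

	for_each_possible_cpu(cpu) {
		struct device *cpu_dev = get_cpu_device(cpu);

		if (!cpu_dev)
			continue;

		cluster = topology_physical_package_id(cpu);
		if (done[cluster])
			continue;

		/* One call per cluster: all its cores share one OPP table */
		if (dev_pm_opp_set_sharing_cpus(cpu_dev,
						topology_core_cpumask(cpu)))
			pr_warn("failed to mark OPPs shared for cpu%d\n",
				cpu);

		done[cluster] = true;
	}
}

The vexpress-spc-cpufreq.c hunk below is then just the consumer side:
dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus) in the ->init()
callback.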
Regards,
Sudeep
-->8
diff --git i/arch/arm/mach-vexpress/spc.c w/arch/arm/mach-vexpress/spc.c
index 354e0e7025ae..e0e2e789a0b7 100644
--- i/arch/arm/mach-vexpress/spc.c
+++ w/arch/arm/mach-vexpress/spc.c
@@ -551,8 +551,9 @@ static struct clk *ve_spc_clk_register(struct device *cpu_dev)
 static int __init ve_spc_clk_init(void)
 {
-	int cpu;
+	int cpu, cluster;
 	struct clk *clk;
+	bool init_opp_table[MAX_CLUSTERS] = { false };
 
 	if (!info)
 		return 0; /* Continue only if SPC is initialised */
 
@@ -578,8 +579,17 @@ static int __init ve_spc_clk_init(void)
 			continue;
 		}
 
+		cluster = topology_physical_package_id(cpu_dev->id);
+		if (init_opp_table[cluster])
+			continue;
+
 		if (ve_init_opp_table(cpu_dev))
 			pr_warn("failed to initialise cpu%d opp table\n", cpu);
+		else if (dev_pm_opp_set_sharing_cpus(cpu_dev,
+			 topology_core_cpumask(cpu_dev->id)))
+			pr_warn("failed to mark OPPs shared for cpu%d\n", cpu);
+
+		init_opp_table[cluster] = true;
 	}
 
 	platform_device_register_simple("vexpress-spc-cpufreq", -1, NULL, 0);
diff --git i/drivers/cpufreq/vexpress-spc-cpufreq.c w/drivers/cpufreq/vexpress-spc-cpufreq.c
index 506e3f2bf53a..83c85d3d67e3 100644
--- i/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ w/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -434,7 +434,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
 	if (cur_cluster < MAX_CLUSTERS) {
 		int cpu;
 
-		cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
+		dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
 
 		for_each_cpu(cpu, policy->cpus)
 			per_cpu(physical_cluster, cpu) = cur_cluster;
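
For completeness: the point of the last hunk is that policy->cpus (and
therefore policy->related_cpus) is now seeded from the sharing mask
recorded at device_initcall time, before any CPU can be hotplugged out,
instead of from topology_core_cpumask() at policy-init time, whose
contents change across CPU hp. That is what was producing the wrong
policy->related_cpus on TC2.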