Message-ID: <0e1cf1c3-3790-3032-2843-04a112de1411@arm.com>
Date:   Thu, 28 Nov 2019 11:01:43 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Viresh Kumar <viresh.kumar@...aro.org>,
        Sudeep Holla <sudeep.holla@....com>
Cc:     "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Liviu Dudau <liviu.dudau@....com>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Lukasz Luba <lukasz.luba@....com>, linux-pm@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cpufreq: vexpress-spc: Fix wrong alteration of
 policy->related_cpus during CPU hp

On 28/11/2019 03:31, Viresh Kumar wrote:
> On 27-11-19, 15:40, Sudeep Holla wrote:
>> diff --git i/arch/arm/mach-vexpress/spc.c w/arch/arm/mach-vexpress/spc.c
>> index 354e0e7025ae..e0e2e789a0b7 100644
>> --- i/arch/arm/mach-vexpress/spc.c
>> +++ w/arch/arm/mach-vexpress/spc.c
>> @@ -551,8 +551,9 @@ static struct clk *ve_spc_clk_register(struct device *cpu_dev)
>>
>>  static int __init ve_spc_clk_init(void)
>>  {
>> -       int cpu;
>> +       int cpu, cluster;
>>         struct clk *clk;
>> +       bool init_opp_table[MAX_CLUSTERS] = { false };
>>
>>         if (!info)
>>                 return 0; /* Continue only if SPC is initialised */
>> @@ -578,8 +579,17 @@ static int __init ve_spc_clk_init(void)
>>                         continue;
>>                 }
>>
>> +               cluster = topology_physical_package_id(cpu_dev->id);
>> +               if (init_opp_table[cluster])
>> +                       continue;
>> +
>>                 if (ve_init_opp_table(cpu_dev))
>>                         pr_warn("failed to initialise cpu%d opp table\n", cpu);
>> +               else if (dev_pm_opp_set_sharing_cpus(cpu_dev,
>> +                        topology_core_cpumask(cpu_dev->id)))
>> +                       pr_warn("failed to mark OPPs shared for cpu%d\n", cpu);
>> +
>> +               init_opp_table[cluster] = true;
>>         }
>>
>>         platform_device_register_simple("vexpress-spc-cpufreq", -1, NULL, 0);
>> diff --git i/drivers/cpufreq/vexpress-spc-cpufreq.c w/drivers/cpufreq/vexpress-spc-cpufreq.c
>> index 506e3f2bf53a..83c85d3d67e3 100644
>> --- i/drivers/cpufreq/vexpress-spc-cpufreq.c
>> +++ w/drivers/cpufreq/vexpress-spc-cpufreq.c
>> @@ -434,7 +434,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
>>         if (cur_cluster < MAX_CLUSTERS) {
>>                 int cpu;
>>
>> -               cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
>> +               dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
>>
>>                 for_each_cpu(cpu, policy->cpus)
>>                         per_cpu(physical_cluster, cpu) = cur_cluster;
> 
> This is a better *work-around*, I would say, as we can't break it
> the way I explained earlier :)

I do agree. I ran a CPU hotplug stress test on TC2 and it looks good.
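
For reference, the pairing the fix relies on looks roughly like this
(a minimal sketch with hypothetical helper names, not the actual
vexpress code; error handling trimmed). dev_pm_opp_set_sharing_cpus()
records the sharing mask at boot time, while topology_core_cpumask()
still lists every CPU of the cluster; dev_pm_opp_get_sharing_cpus()
then returns that recorded mask in ->init(), unaffected by CPUs that
have since gone offline:

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/pm_opp.h>
#include <linux/topology.h>

/* Boot path: record, once per cluster, which CPUs share the OPP
 * table, while topology_core_cpumask() still contains all of them. */
static int __init mark_cluster_opps_shared(int cpu)
{
	struct device *cpu_dev = get_cpu_device(cpu);

	if (!cpu_dev)
		return -ENODEV;

	return dev_pm_opp_set_sharing_cpus(cpu_dev,
					   topology_core_cpumask(cpu));
}

/* cpufreq ->init() path: retrieve the recorded mask. Unlike
 * topology_core_cpumask(), which shrinks when siblings are
 * hotplugged out, this stays stable, so policy->cpus (and hence
 * policy->related_cpus) no longer changes across CPU hp. */
static int fill_policy_cpus(struct cpufreq_policy *policy)
{
	struct device *cpu_dev = get_cpu_device(policy->cpu);

	if (!cpu_dev)
		return -ENODEV;

	return dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
}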

Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
