Message-ID: <e3a4bc21-c334-4d48-90b5-aab8d187939e@nvidia.com>
Date:   Sat, 1 Aug 2020 17:46:43 +0530
From:   Sumit Gupta <sumitg@...dia.com>
To:     Sudeep Holla <sudeep.holla@....com>
CC:     Kefeng Wang <wangkefeng.wang@...wei.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Mikko Perttunen <mperttunen@...dia.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Hulk Robot <hulkci@...wei.com>,
        "linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        "Bibek Basu" <bbasu@...dia.com>, Sumit Gupta <sumitg@...dia.com>,
        linux-tegra <linux-tegra@...r.kernel.org>,
        Thierry Reding <thierry.reding@...il.com>,
        "Jon Hunter" <jonathanh@...dia.com>
Subject: Re: [PATCH -next] arm64: Export __cpu_logical_map


>>>>> ERROR: modpost: "__cpu_logical_map" [drivers/cpufreq/tegra194-cpufreq.ko] undefined!
>>>>>
>>>>> The ARM64 tegra194-cpufreq driver uses cpu_logical_map(); export
>>>>> __cpu_logical_map to fix the build issue.
>>>>>
>>>
>>> I wonder why the mpidr is not read directly from the cpu, as is done in
>>> other instances in the drivers. The cpufreq_driver->init call happens when
>>> the cpu is being brought online and is executed on the required cpu, IIUC.
>>>
>> Yes, that is what happens in the hotplug case.
>> But in the case of system boot, 'cpufreq_driver->init' is called later,
>> during the cpufreq platform driver's probe. The CPU in 'policy->cpu' can
>> then be different from the current CPU, which is why read_cpuid_mpidr()
>> can't be used directly.
>>
> 
> Fair enough, so why not do a cross call like in set_target? Since it is a
> one-off in init, I don't see any issue, given that you are already doing it
> at runtime for set_target.
> 
>>> read_cpuid_mpidr() is inline and avoids having to export cpu_logical_map.
>>> Though we may not add physical hotplug anytime soon, less dependency
>>> on this cpu_logical_map is better, given that we can resolve this without
>>> needing to access the map.
>>>
> 
> To be honest, we have tried to remove all dependencies on the cluster id
> in generic code, as it is not well defined. This one is a Tegra-specific
> driver, so it should be fine. But I am still a bit nervous about exporting
> cpu_logical_map, as we have no clue what that would mean for physical
> hotplug.
> 
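
For context on the suggestion above: on arm64, read_cpuid_mpidr() is a
static inline, roughly the following per arch/arm64/include/asm/cputype.h,
so it returns MPIDR_EL1 of whichever CPU executes it. That is why it has to
run on 'policy->cpu' via a cross call instead of being called directly from
init:

static inline u64 read_cpuid_mpidr(void)
{
        return read_cpuid(MPIDR_EL1);
}
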
As suggested, I have made the change below to get the cluster number using
read_cpuid_mpidr(). Please review and let me know if this looks OK.
I will send a formal patch if the change is fine.

Thanks,
Sumit

----

diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index bae527e..06f5ccf 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -56,9 +56,11 @@ struct read_counters_work {

  static struct workqueue_struct *read_counters_wq;

-static enum cluster get_cpu_cluster(u8 cpu)
+static void get_cpu_cluster(void *cluster)
  {
-       return MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 1);
+       u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
+
+       *((uint32_t *) cluster) = MPIDR_AFFINITY_LEVEL(mpidr, 1);
  }

  /*
@@ -186,8 +188,10 @@ static unsigned int tegra194_get_speed(u32 cpu)
  static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
  {
         struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
-       int cl = get_cpu_cluster(policy->cpu);
         u32 cpu;
+       u32 cl;
+
+       smp_call_function_single(policy->cpu, get_cpu_cluster, &cl, true);
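
With this change, get_cpu_cluster() executes on 'policy->cpu' itself, so
read_cpuid_mpidr() reads the MPIDR of the CPU the policy is being set up for.
The final 'true' (wait) argument makes smp_call_function_single() block until
the helper has written 'cl', so the value is valid when init continues.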


> 
> --
> Regards,
> Sudeep
> 
