Date:   Mon, 06 Aug 2018 13:54:24 -0700
From:   skannan@...eaurora.org
To:     Stephen Boyd <sboyd@...nel.org>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Taniya Das <tdas@...eaurora.org>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        Rajendra Nayak <rnayak@...eaurora.org>,
        Amit Nischal <anischal@...eaurora.org>,
        devicetree@...r.kernel.org, robh@...nel.org,
        amit.kucheria@...aro.org, evgreen@...gle.com
Subject: Re: [PATCH v7 1/2] dt-bindings: cpufreq: Introduce QCOM CPUFREQ
 Firmware bindings

On 2018-08-03 16:46, Stephen Boyd wrote:
> Quoting Taniya Das (2018-07-24 03:42:49)
>> diff --git 
>> a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt 
>> b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt
>> new file mode 100644
>> index 0000000..22d4355
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.txt
>> @@ -0,0 +1,172 @@
> [...]
>> +
>> +               CPU7: cpu@700 {
>> +                       device_type = "cpu";
>> +                       compatible = "qcom,kryo385";
>> +                       reg = <0x0 0x700>;
>> +                       enable-method = "psci";
>> +                       next-level-cache = <&L2_700>;
>> +                       qcom,freq-domain = <&freq_domain_table1>;
>> +                       L2_700: l2-cache {
>> +                               compatible = "cache";
>> +                               next-level-cache = <&L3_0>;
>> +                       };
>> +               };
>> +       };
>> +
>> +       qcom,cpufreq-hw {
>> +               compatible = "qcom,cpufreq-hw";
>> +
>> +               clocks = <&rpmhcc RPMH_CXO_CLK>;
>> +               clock-names = "xo";
>> +
>> +               #address-cells = <2>;
>> +               #size-cells = <2>;
>> +               ranges;
>> +               freq_domain_table0: freq_table0 {
>> +                       reg = <0 0x17d43000 0 0x1400>;
>> +               };
>> +
>> +               freq_domain_table1: freq_table1 {
>> +                       reg = <0 0x17d45800 0 0x1400>;
>> +               };
> 
> Sorry, this is just not proper DT design. The whole node should have a
> reg property, and it should contain two (or three if we're handling the
> L3 clk domain?) different offsets for the different power clusters. The
> problem seems to still be that we don't have a way to map the CPUs to
> the clk domains they're in provided by this hardware block. Making
> subnodes is not the solution.

The problem is mapping the clock domains to the logical CPU numbers that 
CPUfreq uses. The physical-to-logical CPU mapping can be changed by the 
kernel (even through DT, if I'm not mistaken). So we need a way to 
describe in DT which physical CPUs are connected to which cpufreq clock 
domain.

As for subnodes or not, we don't have a strong opinion, but there are a 
couple of other points to consider. Two or more CPUfreq policies might 
share a common frequency table (read from HW) while still having 
separate control of frequency. So you also need a way to associate a 
frequency table with each CPUfreq policy. If you have a better design, 
we are open to suggestions.
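
One way to reconcile the two constraints discussed above (a single node 
with a proper reg property, plus a per-CPU mapping to its clock domain) 
might be to give the cpufreq-hw node both register regions and have each 
CPU reference it with a phandle plus a domain index. This is only an 
illustrative sketch, not the agreed binding; the node name, the 
`#freq-domain-cells` property, and the meaning of the index cell are 
assumptions here, while the register offsets and CPU node are taken from 
the example earlier in the thread:

```dts
/* Hypothetical alternative: one node whose reg covers both
 * frequency domains; each CPU selects its domain by index. */
cpufreq_hw: cpufreq@17d43000 {
        compatible = "qcom,cpufreq-hw";
        /* domain 0 registers, then domain 1 registers */
        reg = <0 0x17d43000 0 0x1400>,
              <0 0x17d45800 0 0x1400>;
        clocks = <&rpmhcc RPMH_CXO_CLK>;
        clock-names = "xo";
        /* one cell in qcom,freq-domain selects the domain */
        #freq-domain-cells = <1>;
};

CPU7: cpu@700 {
        device_type = "cpu";
        compatible = "qcom,kryo385";
        reg = <0x0 0x700>;
        enable-method = "psci";
        /* second cell picks frequency domain 1 of cpufreq_hw */
        qcom,freq-domain = <&cpufreq_hw 1>;
};
```

CPUs whose qcom,freq-domain entries carry the same index would then be 
grouped into one CPUfreq policy, which also lets several policies share 
a frequency table read from the same register region.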

Thanks,
Saravana
