Message-ID: <153143939986.48062.4653224503139250796@swboyd.mtv.corp.google.com>
Date: Thu, 12 Jul 2018 16:49:59 -0700
From: Stephen Boyd <sboyd@...nel.org>
To: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Taniya Das <tdas@...eaurora.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Cc: Rajendra Nayak <rnayak@...eaurora.org>,
Amit Nischal <anischal@...eaurora.org>,
devicetree@...r.kernel.org, robh@...nel.org,
skannan@...eaurora.org, amit.kucheria@...aro.org,
evgreen@...gle.com, Taniya Das <tdas@...eaurora.org>
Subject: Re: [PATCH v5 1/2] dt-bindings: cpufreq: Introduce QCOM CPUFREQ Firmware bindings
Quoting Taniya Das (2018-07-12 11:05:44)
[..]
> +        compatible = "qcom,kryo385";
> +        reg = <0x0 0x600>;
> +        enable-method = "psci";
> +        next-level-cache = <&L2_600>;
> +        qcom,freq-domain = <&freq_domain_table1>;
> +        L2_600: l2-cache {
> +            compatible = "cache";
> +            next-level-cache = <&L3_0>;
> +        };
> +    };
> +
> +    CPU7: cpu@700 {
> +        device_type = "cpu";
> +        compatible = "qcom,kryo385";
> +        reg = <0x0 0x700>;
> +        enable-method = "psci";
> +        next-level-cache = <&L2_700>;
> +        qcom,freq-domain = <&freq_domain_table1>;
> +        L2_700: l2-cache {
> +            compatible = "cache";
> +            next-level-cache = <&L3_0>;
> +        };
> +    };
> +};
> +
> +qcom,cpufreq-hw {
> +    compatible = "qcom,cpufreq-hw";
> +    #address-cells = <2>;
> +    #size-cells = <2>;
> +    ranges;
> +    freq_domain_table0: freq_table0 {
> +        reg = <0 0x17d43000 0 0x1400>;
> +    };
> +
> +    freq_domain_table1: freq_table1 {
> +        reg = <0 0x17d45800 0 0x1400>;
> +    };

It seems that we need to map the CPUs in the cpus node to the frequency
domains in the cpufreq-hw node. Wouldn't that be better served by a
#foo-cells property on the cpufreq-hw node and a <&phandle foo-cell>
specifier in each CPU node? It's also annoying that the cpufreq-hw node
doesn't have a reg property, when it really should have one that covers
the whole register space (or one that is split across the frequency
domains so that there are two reg entries here).
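
For example, something along these lines is a rough sketch of that
idea; the #freq-domain-cells name, the cpufreq_hw label, the node name,
and the single argument cell selecting the domain are only placeholders
for whatever the binding ends up defining:

    cpufreq_hw: cpufreq@17d43000 {
        compatible = "qcom,cpufreq-hw";
        /* reg spans both frequency domain register regions */
        reg = <0 0x17d43000 0 0x1400>,
              <0 0x17d45800 0 0x1400>;
        /* one argument cell picks the frequency domain */
        #freq-domain-cells = <1>;
    };

    CPU7: cpu@700 {
        device_type = "cpu";
        compatible = "qcom,kryo385";
        reg = <0x0 0x700>;
        enable-method = "psci";
        next-level-cache = <&L2_700>;
        /* domain 1, i.e. the second reg region above */
        qcom,freq-domain = <&cpufreq_hw 1>;
    };

That way the driver maps the register space from its own reg property
and the CPU nodes carry nothing more than a phandle plus cell reference
to the domain they belong to.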