Message-Id: <035fe13689dad6d3867a1d33f7d5e91d4637d14a.1657695140.git.viresh.kumar@linaro.org>
Date: Wed, 13 Jul 2022 12:22:56 +0530
From: Viresh Kumar <viresh.kumar@...aro.org>
To: Bjorn Andersson <bjorn.andersson@...aro.org>,
Manivannan Sadhasivam <mani@...nel.org>,
Andy Gross <agross@...nel.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Johan Hovold <johan@...nel.org>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
linux-pm@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH 1/4] dt-bindings: cpufreq-qcom-hw: Move clocks to CPU nodes
cpufreq-hw is a hardware engine that takes care of frequency management
for CPUs. The engine manages the clocks for the CPU devices, but it
isn't the end consumer of the clocks; the CPUs are.

For this reason, it looks incorrect to keep the clock-related properties
in the cpufreq-hw node. They should really be present at the end user,
i.e. the CPUs.

The case is currently simple, as all the devices (i.e. the CPUs) that
the engine manages share the same clock names. What if the clock names
differ between CPUs or clusters? How would keeping the clock properties
in the cpufreq-hw node work in that case?

This design also creates problems for frameworks like OPP, which expect
all such details (clocks) to be present in the end device node itself,
instead of in another related node.

Move the clock properties to the nodes that use them instead.
Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
---
.../bindings/cpufreq/cpufreq-qcom-hw.yaml | 31 ++++++++++---------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
index 2f1b8b6852a0..2ef4eeeca9b9 100644
--- a/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
+++ b/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml
@@ -42,24 +42,12 @@ description: |
- const: freq-domain1
- const: freq-domain2
- clocks:
- items:
- - description: XO Clock
- - description: GPLL0 Clock
-
- clock-names:
- items:
- - const: xo
- - const: alternate
-
'#freq-domain-cells':
const: 1
required:
- compatible
- reg
- - clocks
- - clock-names
- '#freq-domain-cells'
additionalProperties: false
@@ -81,6 +69,8 @@ additionalProperties: false
reg = <0x0 0x0>;
enable-method = "psci";
next-level-cache = <&L2_0>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
L2_0: l2-cache {
compatible = "cache";
@@ -97,6 +87,8 @@ additionalProperties: false
reg = <0x0 0x100>;
enable-method = "psci";
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
L2_100: l2-cache {
compatible = "cache";
@@ -110,6 +102,8 @@ additionalProperties: false
reg = <0x0 0x200>;
enable-method = "psci";
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
L2_200: l2-cache {
compatible = "cache";
@@ -123,6 +117,8 @@ additionalProperties: false
reg = <0x0 0x300>;
enable-method = "psci";
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
L2_300: l2-cache {
compatible = "cache";
@@ -136,6 +132,8 @@ additionalProperties: false
reg = <0x0 0x400>;
enable-method = "psci";
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
L2_400: l2-cache {
compatible = "cache";
@@ -149,6 +147,8 @@ additionalProperties: false
reg = <0x0 0x500>;
enable-method = "psci";
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
L2_500: l2-cache {
compatible = "cache";
@@ -162,6 +162,8 @@ additionalProperties: false
reg = <0x0 0x600>;
enable-method = "psci";
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
L2_600: l2-cache {
compatible = "cache";
@@ -175,6 +177,8 @@ additionalProperties: false
reg = <0x0 0x700>;
enable-method = "psci";
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
L2_700: l2-cache {
compatible = "cache";
@@ -192,9 +196,6 @@ additionalProperties: false
reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>;
reg-names = "freq-domain0", "freq-domain1";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
-
#freq-domain-cells = <1>;
};
};
--
2.31.1.272.g89b43f80a514