Message-ID: <20240427-opp_support-v12-4-f6beb0a1f2fc@quicinc.com>
Date: Sat, 27 Apr 2024 07:22:37 +0530
From: Krishna chaitanya chundru <quic_krichai@...cinc.com>
To: Bjorn Andersson <andersson@...nel.org>,
 Konrad Dybcio <konrad.dybcio@...aro.org>,
 Rob Herring <robh@...nel.org>,
 "Krzysztof Kozlowski" <krzysztof.kozlowski+dt@...aro.org>,
 Conor Dooley <conor+dt@...nel.org>,
 Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
 Lorenzo Pieralisi <lpieralisi@...nel.org>,
 Krzysztof Wilczyński <kw@...ux.com>,
 Bjorn Helgaas <bhelgaas@...gle.com>, <johan+linaro@...nel.org>,
 <bmasney@...hat.com>, <djakov@...nel.org>
CC: <linux-arm-msm@...r.kernel.org>, <devicetree@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-pci@...r.kernel.org>,
<vireshk@...nel.org>, <quic_vbadigan@...cinc.com>,
<quic_skananth@...cinc.com>, <quic_nitegupt@...cinc.com>,
<quic_parass@...cinc.com>, <quic_krichai@...cinc.com>,
<krzysztof.kozlowski@...aro.org>
Subject: [PATCH v12 4/6] arm64: dts: qcom: sm8450: Add OPP table support to PCIe

The PCIe host controller driver needs to choose the appropriate
performance state of the RPMh power domain and the interconnect
bandwidth based on the PCIe data rate.

Hence, add OPP tables specifying the RPMh performance states and the
interconnect peak bandwidth for each supported data rate.

It should be noted that different link configurations may share the
same aggregate bandwidth, e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1
link have the same bandwidth and share the same OPP entry.
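
As an aside (illustrative only, not part of the patch): the
opp-peak-kBps values in the tables below follow from the per-lane
data rate scaled by the line encoding overhead (8b/10b for GEN 1/2,
128b/130b for GEN 3/4), converted to kB/s. A minimal Python sketch
of that arithmetic:

    GTS = {1: 2.5e9, 2: 5.0e9, 3: 8.0e9, 4: 16.0e9}   # transfers/s per lane
    ENC = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130}  # line encoding

    def peak_kbps(gen, lanes):
        """Peak link bandwidth in kB/s for a PCIe generation and link width."""
        return int(GTS[gen] * ENC[gen] * lanes / 8 / 1000)  # bits/s -> kB/s

    for gen, lanes in [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (4, 2)]:
        print(f"GEN {gen} x{lanes}: {peak_kbps(gen, lanes)} kBps")
    # GEN 1 x1: 250000  -> opp-peak-kBps = <250000 1>
    # GEN 2 x1: 500000  -> shares an entry with GEN 1 x2
    # GEN 3 x1: 984615  -> rounded to 984500 in the tables
    # GEN 4 x2: 3938461 -> rounded to 3938000 in the tables

Per the operating-points-v2 binding, opp-peak-kBps carries one value
per interconnect path of the consumer node; here the first cell is the
peak bandwidth vote for the pcie-mem path, while the constant 1 keeps
a minimal vote on the cpu-pcie path.
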
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Signed-off-by: Krishna chaitanya chundru <quic_krichai@...cinc.com>
---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 77 ++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 615296e13c43..2e047aba220b 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -1855,7 +1855,35 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
 			pinctrl-names = "default";
 			pinctrl-0 = <&pcie0_default_state>;
 
+			operating-points-v2 = <&pcie0_opp_table>;
+
 			status = "disabled";
+
+			pcie0_opp_table: opp-table {
+				compatible = "operating-points-v2";
+
+				/* GEN 1 x1 */
+				opp-2500000 {
+					opp-hz = /bits/ 64 <2500000>;
+					required-opps = <&rpmhpd_opp_low_svs>;
+					opp-peak-kBps = <250000 1>;
+				};
+
+				/* GEN 2 x1 */
+				opp-5000000 {
+					opp-hz = /bits/ 64 <5000000>;
+					required-opps = <&rpmhpd_opp_low_svs>;
+					opp-peak-kBps = <500000 1>;
+				};
+
+				/* GEN 3 x1 */
+				opp-8000000 {
+					opp-hz = /bits/ 64 <8000000>;
+					required-opps = <&rpmhpd_opp_nom>;
+					opp-peak-kBps = <984500 1>;
+				};
+			};
+
 		};
 
 		pcie0_phy: phy@...6000 {
@@ -1982,7 +2010,56 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
 			pinctrl-names = "default";
 			pinctrl-0 = <&pcie1_default_state>;
 
+			operating-points-v2 = <&pcie1_opp_table>;
+
 			status = "disabled";
+
+			pcie1_opp_table: opp-table {
+				compatible = "operating-points-v2";
+
+				/* GEN 1 x1 */
+				opp-2500000 {
+					opp-hz = /bits/ 64 <2500000>;
+					required-opps = <&rpmhpd_opp_low_svs>;
+					opp-peak-kBps = <250000 1>;
+				};
+
+				/* GEN 1 x2 and GEN 2 x1 */
+				opp-5000000 {
+					opp-hz = /bits/ 64 <5000000>;
+					required-opps = <&rpmhpd_opp_low_svs>;
+					opp-peak-kBps = <500000 1>;
+				};
+
+				/* GEN 2 x2 */
+				opp-10000000 {
+					opp-hz = /bits/ 64 <10000000>;
+					required-opps = <&rpmhpd_opp_low_svs>;
+					opp-peak-kBps = <1000000 1>;
+				};
+
+				/* GEN 3 x1 */
+				opp-8000000 {
+					opp-hz = /bits/ 64 <8000000>;
+					required-opps = <&rpmhpd_opp_nom>;
+					opp-peak-kBps = <984500 1>;
+				};
+
+				/* GEN 3 x2 and GEN 4 x1 */
+				opp-16000000 {
+					opp-hz = /bits/ 64 <16000000>;
+					required-opps = <&rpmhpd_opp_nom>;
+					opp-peak-kBps = <1969000 1>;
+				};
+
+				/* GEN 4 x2 */
+				opp-32000000 {
+					opp-hz = /bits/ 64 <32000000>;
+					required-opps = <&rpmhpd_opp_nom>;
+					opp-peak-kBps = <3938000 1>;
+				};
+			};
+
 		};
 
 		pcie1_phy: phy@...e000 {
--
2.42.0