Message-ID: <20221104180339.GA2079655-robh@kernel.org>
Date: Fri, 4 Nov 2022 13:03:39 -0500
From: Rob Herring <robh@...nel.org>
To: Sibi Sankar <quic_sibis@...cinc.com>
Cc: andersson@...nel.org, krzysztof.kozlowski+dt@...aro.org,
sudeep.holla@....com, cristian.marussi@....com, agross@...nel.org,
linux-arm-msm@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, konrad.dybcio@...ainline.org,
quic_avajid@...cinc.com
Subject: Re: [RFC 1/2] dt-bindings: firmware: arm,scmi: Add support for
memlat vendor protocol
On Thu, Nov 03, 2022 at 10:28:31AM +0530, Sibi Sankar wrote:
> Add bindings support for the SCMI QTI memlat (memory latency) vendor
> protocol. The memlat vendor protocol enables the frequency scaling of
> various buses (L3/LLCC/DDR) based on the memory latency governor
> running on the CPUSS Control Processor.
I thought the interconnect binding was what provided details for bus
scaling.
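
For reference, bandwidth requests along a path are normally expressed with
the interconnect consumer binding, roughly like this (the provider and
endpoint names here are illustrative, not taken from this patch):

    cpu@0 {
            device_type = "cpu";
            reg = <0x0>;
            interconnects = <&gem_noc MASTER_APPSS_PROC &mc_virt SLAVE_EBI1>;
            interconnect-names = "cpu-mem";
    };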
>
> Signed-off-by: Sibi Sankar <quic_sibis@...cinc.com>
> ---
> .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
> 1 file changed, 164 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> index 1c0388da6721..efc8a5a8bffe 100644
> --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> @@ -189,6 +189,47 @@ properties:
> reg:
> const: 0x18
>
> + protocol@80:
> + type: object
> + properties:
> + reg:
> + const: 0x80
> +
> + qcom,bus-type:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + items:
> + minItems: 1
> + description:
> + Identifier of the bus type to be scaled by the memlat protocol.
> +
> + cpu-map:
cpu-map only goes under /cpus node.
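i.e. the topology description belongs there, along the lines of:

    cpus {
            #address-cells = <1>;
            #size-cells = <0>;

            cpu-map {
                    cluster0 {
                            core0 {
                                    cpu = <&CPU0>;
                            };
                            core1 {
                                    cpu = <&CPU1>;
                            };
                    };
            };
    };

The cluster/core/thread nodes there carry only subnodes and 'cpu'
phandles, nothing else.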
> + type: object
> + description:
> + The list of all cpu cluster configurations to be tracked by the memlat protocol
> +
> + patternProperties:
> + '^cluster[0-9]':
> + type: object
> + description:
> + Each cluster node describes the frequency domain associated with the
> + CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
> +
> + properties:
cpu-map nodes don't have properties.
> + operating-points-v2: true
> +
> + qcom,freq-domain:
Please don't add new users of this. Use the performance-domains binding
instead.
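That looks roughly like this on the consumer side (provider phandle and
cell value illustrative):

    cpu@0 {
            device_type = "cpu";
            reg = <0x0>;
            performance-domains = <&perf_provider 0>;
    };

with the provider node defining #performance-domain-cells.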
> + $ref: /schemas/types.yaml#/definitions/phandle-array
> + description:
> + Reference to the frequency domain of the CPUFREQ HW engine
> + items:
> + - items:
> + - description: phandle to CPUFREQ HW engine
> + - description: frequency domain associated with the cluster
> +
> + required:
> + - qcom,freq-domain
> + - operating-points-v2
> +
> additionalProperties: false
>
> patternProperties:
> @@ -429,4 +470,127 @@ examples:
> };
> };
>
> + - |
> + #include <dt-bindings/interrupt-controller/arm-gic.h>
> +
> + firmware {
> + scmi {
> + compatible = "arm,scmi";
> +
> + #address-cells = <1>;
> + #size-cells = <0>;
> +
> + mboxes = <&cpucp_mbox>;
> + mbox-names = "tx";
> + shmem = <&cpu_scp_lpri>;
> +
> + scmi_memlat: protocol@80 {
> + reg = <0x80>;
> + qcom,bus-type = <0x2>;
> +
> + cpu-map {
> + cluster0 {
> + qcom,freq-domain = <&cpufreq_hw 0>;
> + operating-points-v2 = <&cpu0_opp_table>;
> + };
> +
> + cluster1 {
> + qcom,freq-domain = <&cpufreq_hw 1>;
> + operating-points-v2 = <&cpu4_opp_table>;
> + };
> +
> + cluster2 {
> + qcom,freq-domain = <&cpufreq_hw 2>;
> + operating-points-v2 = <&cpu7_opp_table>;
> + };
> + };
> + };
> + };
> +
> + cpu0_opp_table: opp-table-cpu0 {
> + compatible = "operating-points-v2";
> +
> + cpu0_opp_300mhz: opp-300000000 {
> + opp-hz = /bits/ 64 <300000000>;
> + opp-peak-kBps = <9600000>;
> + };
> +
> + cpu0_opp_1325mhz: opp-1324800000 {
> + opp-hz = /bits/ 64 <1324800000>;
> + opp-peak-kBps = <33792000>;
> + };
> +
> + cpu0_opp_2016mhz: opp-2016000000 {
> + opp-hz = /bits/ 64 <2016000000>;
> + opp-peak-kBps = <48537600>;
> + };
> + };
> +
> + cpu4_opp_table: opp-table-cpu4 {
> + compatible = "operating-points-v2";
> +
> + cpu4_opp_691mhz: opp-691200000 {
> + opp-hz = /bits/ 64 <691200000>;
> + opp-peak-kBps = <9600000>;
> + };
> +
> + cpu4_opp_941mhz: opp-940800000 {
> + opp-hz = /bits/ 64 <940800000>;
> + opp-peak-kBps = <17817600>;
> + };
> +
> + cpu4_opp_2611mhz: opp-2611200000 {
> + opp-hz = /bits/ 64 <2611200000>;
> + opp-peak-kBps = <48537600>;
> + };
> + };
> +
> + cpu7_opp_table: opp-table-cpu7 {
> + compatible = "operating-points-v2";
> +
> + cpu7_opp_806mhz: opp-806400000 {
> + opp-hz = /bits/ 64 <806400000>;
> + opp-peak-kBps = <9600000>;
> + };
> +
> + cpu7_opp_2381mhz: opp-2380800000 {
> + opp-hz = /bits/ 64 <2380800000>;
> + opp-peak-kBps = <44851200>;
> + };
> +
> + cpu7_opp_2515mhz: opp-2515200000 {
> + opp-hz = /bits/ 64 <2515200000>;
> + opp-peak-kBps = <48537600>;
> + };
> + };
> + };
> +
> +
> + soc {
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + cpucp_mbox: mailbox@...00000 {
> + compatible = "qcom,cpucp-mbox";
> + reg = <0x0 0x17c00000 0x0 0x10>, <0x0 0x18590300 0x0 0x700>;
> + interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
> + #mbox-cells = <0>;
> + };
> +
> + sram@...09400 {
> + compatible = "mmio-sram";
> + reg = <0x0 0x18509400 0x0 0x400>;
> + no-memory-wc;
> +
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges = <0x0 0x0 0x18509400 0x400>;
> +
> + cpu_scp_lpri: scp-sram-section@0 {
> + compatible = "arm,scmi-shmem";
> + reg = <0x0 0x80>;
> + };
> + };
> + };
> +
> ...
> --
> 2.7.4
>
>