Message-ID: <4kovyp7655kwkznnem5e4mg2yjroc2x76vsyp6w4bm5n7tn5xy@rrz2ih2u4p2x>
Date: Thu, 24 Oct 2024 22:54:10 +0300
From: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
To: Sibi Sankar <quic_sibis@...cinc.com>
Cc: sudeep.holla@....com, cristian.marussi@....com, andersson@...nel.org, 
	konrad.dybcio@...aro.org, robh+dt@...nel.org, krzysztof.kozlowski+dt@...aro.org, 
	linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org, devicetree@...r.kernel.org, 
	linux-arm-kernel@...ts.infradead.org, quic_rgottimu@...cinc.com, quic_kshivnan@...cinc.com, 
	conor+dt@...nel.org, arm-scmi@...r.kernel.org
Subject: Re: [PATCH V4 1/5] dt-bindings: firmware: Document bindings for QCOM
 SCMI Generic Extension

On Tue, Oct 22, 2024 at 12:43:09PM +0530, Sibi Sankar wrote:
> 
> 
> On 10/7/24 23:36, Dmitry Baryshkov wrote:
> > On Mon, Oct 07, 2024 at 11:40:19AM GMT, Sibi Sankar wrote:
> > > Document the various memory buses that can be monitored and scaled by
> > > the memory latency governor hosted by the QCOM SCMI Generic Extension
> > > Protocol v1.0.
> > > 
> > > Signed-off-by: Sibi Sankar <quic_sibis@...cinc.com>
> 
> Hey Dmitry,
> 
> Thanks for taking time to review the series!
> 
> > > ---
> > > 
> > > v3:
> > > * Restructure the bindings to mimic IMX [Christian]
> > > 
> > >   .../bindings/firmware/arm,scmi.yaml           |   1 +
> > >   .../bindings/firmware/qcom,scmi-memlat.yaml   | 246 ++++++++++++++++++
> > >   .../dt-bindings/firmware/qcom,scmi-memlat.h   |  22 ++
> > >   3 files changed, 269 insertions(+)
> > >   create mode 100644 Documentation/devicetree/bindings/firmware/qcom,scmi-memlat.yaml
> > >   create mode 100644 include/dt-bindings/firmware/qcom,scmi-memlat.h
> > > 
> > > diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> > > index 54d7d11bfed4..1d405f429168 100644
> > > --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> > > +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> > > @@ -24,6 +24,7 @@ description: |
> > >   anyOf:
> > >     - $ref: /schemas/firmware/nxp,imx95-scmi.yaml
> > > +  - $ref: /schemas/firmware/qcom,scmi-memlat.yaml
> > >   properties:
> > >     $nodename:
> > > diff --git a/Documentation/devicetree/bindings/firmware/qcom,scmi-memlat.yaml b/Documentation/devicetree/bindings/firmware/qcom,scmi-memlat.yaml
> > > new file mode 100644
> > > index 000000000000..0e8ea6dacd6a
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/firmware/qcom,scmi-memlat.yaml
> > > @@ -0,0 +1,246 @@
> > > +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> > > +%YAML 1.2
> > > +---
> > > +$id: http://devicetree.org/schemas/firmware/qcom,scmi-memlat.yaml#
> > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > +
> > > +title: Qualcomm SCMI Memory Bus nodes
> > > +
> > > +maintainers:
> > > +  - Sibi Sankar <quic_sibis@...cinc.com>
> > > +
> > > +description:
> > > +  This binding describes the various memory buses that can be monitored and scaled
> > > +  by memory latency governor running on the CPU Control Processor (SCMI controller).
> > > +
> > > +properties:
> > > +  protocol@80:
> > > +    $ref: '/schemas/firmware/arm,scmi.yaml#/$defs/protocol-node'
> > > +    unevaluatedProperties: false
> > > +
> > > +    properties:
> > > +      reg:
> > > +        const: 0x80
> > > +
> > > +    patternProperties:
> > > +      '^memory-[0-9]$':
> > > +        type: object
> > > +        unevaluatedProperties: false
> > > +        description:
> > > +          The list of all memory buses that can be monitored and scaled by the
> > > +          memory latency governor running on the SCMI controller.
> > > +
> > > +        properties:
> > > +          qcom,memory-type:
> > > +            $ref: /schemas/types.yaml#/definitions/uint32
> > > +            enum: [0, 1, 2]
> > > +            description: |
> > > +              Memory Bus Identifier
> > > +              0 = QCOM_MEM_TYPE_DDR
> > > +              1 = QCOM_MEM_TYPE_LLCC
> > > +              2 = QCOM_MEM_TYPE_DDR_QOS
> > 
> > I'm sorry if this has been discussed and frowned upon, but can you
> > squash the memory type into the node name?
> 
> I don't think anyone had any strong opinions on how the
> nodes are to be named. We went with a generic node name since
> it could accommodate multiple instances of llcc or ddr in the
> future. We didn't want them to be named ddr-0/ddr-1 and so on.
> So I'll continue to stick with the current naming unless you
> have a strong reason other than readability.

As I wrote in the other email, the memory types are not equal: they have
different properties, etc. Having non-generic node names allows the
schema to describe that.
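
For example (a rough sketch only, reusing the property names from your
v4), per-type nodes would let the schema state that freq-table-hz does
not apply to a ddr-qos node while still being required for the others:

    properties:
      ddr-qos:
        type: object
        properties:
          freq-table-hz: false

    patternProperties:
      '^(ddr|llcc)(-[0-9]+)?$':
        type: object
        required:
          - freq-table-hz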

Last, but not least, please consider how reserved memory nodes are
handled nowadays: they have non-generic names, each one describing the
purpose / kind.

> > protocol@80 {
> > 	ddr {
> > 	};
> > 
> > 	llcc {
> > 	};
> > 
> > 	ddr-qos {
> > 	};
> > };
> > 
> > > +
> > > +          freq-table-hz:
> > > +            items:
> > > +              items:
> > > +                - description: Minimum frequency of the memory bus in Hz
> > > +                - description: Maximum frequency of the memory bus in Hz
> > 
> > Does it make sense for the DDR-QOS type? Can we hardcode those values
> > and drop freq-table-hz from the DDR-QOS node?
> > 
> > Also, can we drop this completely by adding one extra OPP entry with the
> > minimum memory bus frequency?
> 
> The map table doesn't necessarily list all the supported
> frequencies. It was made that way so that the table is flexible
> enough that it doesn't have to change much across platforms.
> Hence the need for a separate property listing the min/max
> frequencies.

Please use opp-supported-hw or other similar techniques to describe
supported frequencies.
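
Something along these lines would work (just an illustration; the
opp-supported-hw masks are made up and the rates are taken from your
example), so a single opp-table can cover several platforms while still
describing which entries each of them actually supports:

    opp-table {
        compatible = "operating-points-v2";

        opp-200000000 {
            opp-hz = /bits/ 64 <200000000>;
            /* made-up mask: entry valid on all SKUs */
            opp-supported-hw = <0x3>;
        };

        opp-4224000000 {
            opp-hz = /bits/ 64 <4224000000>;
            /* made-up mask: entry valid only on the higher-binned SKU */
            opp-supported-hw = <0x2>;
        };
    };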

> 
> > 
> > > +
> > > +        patternProperties:
> > > +          '^monitor-[0-9]$':
> > > +            type: object
> > > +            unevaluatedProperties: false
> > > +            description:
> > > +              The list of all monitors detecting the memory latency bound workloads using
> > > +              various counters.
> > > +
> > > +            properties:
> > > +              qcom,compute-type:
> > > +                description:
> > > +                  Monitors of type compute perform bus dvfs based on a rudimentary CPU
> > > +                  frequency to memory frequency map.
> > > +                type: boolean
> > 
> > This seems to be redundant. If there is no qcom,ipm-ceil property, then
> > it's qcom,compute-type, isn't it?
> 
> ack
> 
> > 
> > > +
> > > +              qcom,ipm-ceil:
> > > +                $ref: /schemas/types.yaml#/definitions/uint32
> > > +                description:
> > > +                  Monitors having this property perform bus dvfs based on the same
> > > +                  rudimentary table but the scaling is performed only if the calculated
> > > +                  IPM (Instructions Per Miss) exceeds the given ceiling.
> > > +
> > > +              cpus:
> > > +                $ref: /schemas/types.yaml#/definitions/phandle-array
> > > +                description:
> > > +                  Should be a list of phandles to CPU nodes (as described in
> > > +                  Documentation/devicetree/bindings/arm/cpus.yaml).
> > 
> > Which CPU nodes? I see that the examples list all CPUs here. Do we
> > really need them?
> 
> This observation is only valid for X1E, where all the CPUs have
> identical frequency characteristics. Even in this case we need to
> list them to handle cases where CPUs get disabled by the bootloader
> on lower-core-count X1E parts, i.e. we use this to figure out the
> actual physical mask.

This should be a part of the description.

BTW, why do you need to remove bootloader-removed cores? Can you simply
ignore non-existing CPUs instead?

> 
> > 
> > > +
> > > +              operating-points-v2: true
> > > +              opp-table:
> > > +                type: object
> > > +
> > > +            required:
> > > +              - cpus
> > > +              - operating-points-v2
> > > +
> > > +            oneOf:
> > > +              - required: [ 'qcom,compute-type' ]
> > > +              - required: [ 'qcom,ipm-ceil' ]
> > > +
> > > +        required:
> > > +          - qcom,memory-type
> > > +          - freq-table-hz
> > > +
> > > +additionalProperties: true
> > > +
> > > +examples:
> > > +  - |
> > > +    #include <dt-bindings/firmware/qcom,scmi-memlat.h>
> > > +
> > > +    firmware {
> > > +        scmi {
> > > +            compatible = "arm,scmi";
> > > +            mboxes = <&cpucp_mbox 0>, <&cpucp_mbox 2>;
> > > +            mbox-names = "tx", "rx";
> > > +            shmem = <&cpu_scp_lpri0>, <&cpu_scp_lpri1>;
> > > +
> > > +            #address-cells = <1>;
> > > +            #size-cells = <0>;
> > > +
> > > +            protocol@80 {
> > > +                reg = <0x80>;
> > > +
> > > +                memory-0 {
> > > +                    qcom,memory-type = <QCOM_MEM_TYPE_DDR>;
> > > +                    freq-table-hz = /bits/ 64 <200000000 4224000000>;
> > > +
> > > +                    monitor-0 {
> > 
> > Hmm. Can we say that each memory type can have at most one IPM and one
> > compute aka "passive" memlat monitor? Does it make sense to use them as
> > node names and drop the extra monitor-M names?
> 
> Again, this observation is valid only for X1E, where the CPU
> frequencies are identical across clusters; it is not true for
> other mobile SoCs. So each memory can have more than two monitors,
> i.e. at least one active/passive monitor for each cluster.

Description or commit message, please.

> 
> > 
> > > +                        qcom,ipm-ceil = <20000000>;
> > > +                        cpus = <&CPU0 &CPU1 &CPU2 &CPU3 &CPU4 &CPU5 &CPU6 &CPU7
> > > +                                &CPU8 &CPU9 &CPU10 &CPU11>;
> > 
> > Are CPU lists different between monitors? Can they be different? Can
> > they be different between different memory types?
> 
> same explanation.
> 
> > 
> > > +                        operating-points-v2 = <&memory0_monitor0_opp_table>;
> > > +
> > > +                        memory0_monitor0_opp_table: opp-table {
> > 
> > sensible names are better:
> 
> I think I just picked these names up from a cpufreq table upstream.

Doesn't mean that you can't be better than that :-D

> 
> > 
> > ddr_ipm_opp_table: opp-table {
> > };
> > 

-- 
With best wishes
Dmitry
