Date:   Mon, 16 Jan 2023 17:29:02 +0100
From:   Thierry Reding <thierry.reding@...il.com>
To:     Sumit Gupta <sumitg@...dia.com>
Cc:     treding@...dia.com, krzysztof.kozlowski@...aro.org,
        dmitry.osipenko@...labora.com, viresh.kumar@...aro.org,
        rafael@...nel.org, jonathanh@...dia.com, robh+dt@...nel.org,
        linux-kernel@...r.kernel.org, linux-tegra@...r.kernel.org,
        linux-pm@...r.kernel.org, devicetree@...r.kernel.org,
        sanjayc@...dia.com, ksitaraman@...dia.com, ishah@...dia.com,
        bbasu@...dia.com
Subject: Re: [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and
 interconnects property

On Tue, Dec 20, 2022 at 09:32:36PM +0530, Sumit Gupta wrote:
> Add the OPP tables and interconnects properties required to scale the
> DDR frequency for better performance. Each operating point entry in
> an OPP table maps a CPU frequency to a per-MC-channel bandwidth. A
> separate table is added for each cluster, even though the table data
> is identical, because the bandwidth request is made per cluster. The
> OPP framework creates a single ICC path if the table is marked
> 'opp-shared' and shared among all clusters. In our case the table
> data is the same, but the MC client ID argument in the interconnects
> property differs per cluster, which results in a separate ICC path
> for each cluster.
> 
> Signed-off-by: Sumit Gupta <sumitg@...dia.com>
> ---
>  arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++++++++++++++
>  1 file changed, 276 insertions(+)
> 
> diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> index eaf05ee9acd1..ed7d0f7da431 100644
> --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> @@ -2840,6 +2840,9 @@
>  
>  			enable-method = "psci";
>  
> +			operating-points-v2 = <&cl0_opp_tbl>;
> +			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
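
(As a rough consumer-side sketch, not taken from this series: a cpufreq
driver could resolve the per-cluster "interconnects" phandle above and
request DDR bandwidth along these lines. The context structure, the
function names and the avg == peak choice are illustrative assumptions
only.)

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/types.h>

struct cluster_bw_ctx {
        struct icc_path *path;  /* per-cluster CPU -> EMC path */
};

/* Probe time: resolve the cluster's "interconnects" property from DT. */
static int cluster_bw_init(struct device *dev, struct cluster_bw_ctx *ctx)
{
        ctx->path = of_icc_get(dev, NULL);
        if (IS_ERR(ctx->path))
                return PTR_ERR(ctx->path);

        return 0;
}

/*
 * Frequency change: request the bandwidth (kBps) listed in the selected
 * OPP entry; avg == peak here only to keep the example short.
 */
static int cluster_bw_set(struct cluster_bw_ctx *ctx, u32 peak_kbps)
{
        return icc_set_bw(ctx->path, peak_kbps, peak_kbps);
}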

I dislike how this muddies the waters between hardware and software
description. We don't have a hardware client ID for the CPU clusters, so
there's no good way to describe this in a hardware-centric way. We used
to have MPCORE read and write clients for this, but as far as I know
they covered the entire CCPLEX rather than individual clusters. It'd be
interesting to know what the BPMP does underneath; perhaps that could
give some indication of a better hardware value to use here.

Failing that, I wonder if a combination of icc_node_create() and
icc_get() can be used for this type of "virtual node" special case.
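
(For reference, a minimal sketch of that "virtual node" idea, assuming
the memory controller driver registers a software-only initiator node
and a consumer then asks for a path by global node IDs. The IDs, node
name and function names below are made up for illustration and are not
part of any existing driver.)

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>

#define VIRT_CPU_CLUSTER0_ID    1000    /* hypothetical software-only ID */
#define VIRT_EMC_ID             1001    /* hypothetical EMC target ID */

/* Provider side: register a node with no hardware client behind it. */
static int mc_add_virtual_cluster_node(struct icc_provider *provider)
{
        struct icc_node *node;

        node = icc_node_create(VIRT_CPU_CLUSTER0_ID);
        if (IS_ERR(node))
                return PTR_ERR(node);

        node->name = "cpu-cluster0";
        icc_node_add(node, provider);

        /* Link the virtual initiator to the external memory controller. */
        return icc_link_create(node, VIRT_EMC_ID);
}

/* Consumer side: request the path by node IDs instead of a DT phandle. */
static struct icc_path *get_cluster0_emc_path(struct device *dev)
{
        return icc_get(dev, VIRT_CPU_CLUSTER0_ID, VIRT_EMC_ID);
}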

Thierry

