Date:   Tue, 5 Jan 2021 15:05:59 -0800
From:   Bjorn Andersson <bjorn.andersson@...aro.org>
To:     Danny Lin <danny@...ag0n.dev>, ulf.hansson@...aro.org
Cc:     Andy Gross <agross@...nel.org>, Rob Herring <robh+dt@...nel.org>,
        linux-arm-msm@...r.kernel.org, devicetree@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: dts: qcom: sm8150: Add support for deep CPU
 cluster idle

On Tue 05 Jan 12:10 PST 2021, Danny Lin wrote:

> This commit adds support for deep idling of the entire unified DynamIQ
> CPU cluster on sm8150. In this idle state, the LLCC (Last-Level Cache
> Controller) is powered off and the AOP (Always-On Processor) enters a
> low-power sleep state.
> 
> I'm not sure what the per-CPU 0x400000f4 idle state previously
> contributed by Qualcomm as the "cluster sleep" state corresponds to;
> the downstream kernel has no such state. The real deep cluster idle
> state is 0x4100c244, composed of:
> 
>     Cluster idle state: (0xc24) << 4 = 0xc240
>     Is reset state: 1 << 30 = 0x40000000
>     Affinity level: 1 << 24 = 0x1000000
>     CPU idle state: 0x4 (power collapse)
> 
> This setup can be replicated with the PSCI power domain cpuidle
> driver, which uses OSI (OS-initiated mode) to enter cluster idle when
> the last active CPU goes idle.
> 
> The cluster idle state cannot be used as a plain cpuidle state because
> it requires that all CPUs in the cluster be idle.
> 

This looks quite reasonable to me.

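As a quick sanity check, the new suspend parameter does match the
breakdown in the commit message. A throwaway sketch (the field names
here are just descriptive labels, not taken from any kernel header):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Fields per the breakdown in the commit message */
		uint32_t cluster_state = 0xc24 << 4; /* 0x0000c240 */
		uint32_t is_reset      = 1 << 30;    /* 0x40000000 */
		uint32_t affinity_lvl  = 1 << 24;    /* 0x01000000 */
		uint32_t cpu_state     = 0x4;        /* power collapse */

		/* Prints 0x4100c244, the value used for
		 * arm,psci-suspend-param in the patch.
		 */
		printf("0x%08x\n",
		       cluster_state | is_reset | affinity_lvl | cpu_state);
		return 0;
	}
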
@Ulf, this seems to be the first attempt at wiring up the domain idle
pieces upstream; would you mind having a look?

The SM8150 is pretty much identical to the RB3 in this regard.
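
(For whoever ends up testing this: the per-CPU states are visible under
/sys/devices/system/cpu/cpuN/cpuidle/ as usual, and whether the cluster
domain actually reaches CLUSTER_SLEEP_0 should be observable in the
genpd statistics under debugfs, assuming those are enabled.)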

Regards,
Bjorn

> Signed-off-by: Danny Lin <danny@...ag0n.dev>
> ---
>  arch/arm64/boot/dts/qcom/sm8150.dtsi | 91 ++++++++++++++++++++++------
>  1 file changed, 73 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
> index 309e00b6fa44..8956c6986744 100644
> --- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
> @@ -52,10 +52,10 @@ CPU0: cpu@0 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <488>;
>  			dynamic-power-coefficient = <232>;
> -			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_0>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
> +			power-domains = <&CPU_PD0>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_0: l2-cache {
>  				compatible = "cache";
> @@ -73,10 +73,10 @@ CPU1: cpu@100 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <488>;
>  			dynamic-power-coefficient = <232>;
> -			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_100>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
> +			power-domains = <&CPU_PD1>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_100: l2-cache {
>  				compatible = "cache";
> @@ -92,10 +92,10 @@ CPU2: cpu@200 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <488>;
>  			dynamic-power-coefficient = <232>;
> -			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_200>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
> +			power-domains = <&CPU_PD2>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_200: l2-cache {
>  				compatible = "cache";
> @@ -110,10 +110,10 @@ CPU3: cpu@300 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <488>;
>  			dynamic-power-coefficient = <232>;
> -			cpu-idle-states = <&LITTLE_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_300>;
>  			qcom,freq-domain = <&cpufreq_hw 0>;
> +			power-domains = <&CPU_PD3>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_300: l2-cache {
>  				compatible = "cache";
> @@ -128,10 +128,10 @@ CPU4: cpu@400 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <1024>;
>  			dynamic-power-coefficient = <369>;
> -			cpu-idle-states = <&BIG_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_400>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
> +			power-domains = <&CPU_PD4>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_400: l2-cache {
>  				compatible = "cache";
> @@ -146,10 +146,10 @@ CPU5: cpu@500 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <1024>;
>  			dynamic-power-coefficient = <369>;
> -			cpu-idle-states = <&BIG_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_500>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
> +			power-domains = <&CPU_PD5>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_500: l2-cache {
>  				compatible = "cache";
> @@ -164,10 +164,10 @@ CPU6: cpu@600 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <1024>;
>  			dynamic-power-coefficient = <369>;
> -			cpu-idle-states = <&BIG_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_600>;
>  			qcom,freq-domain = <&cpufreq_hw 1>;
> +			power-domains = <&CPU_PD6>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_600: l2-cache {
>  				compatible = "cache";
> @@ -182,10 +182,10 @@ CPU7: cpu@700 {
>  			enable-method = "psci";
>  			capacity-dmips-mhz = <1024>;
>  			dynamic-power-coefficient = <421>;
> -			cpu-idle-states = <&BIG_CPU_SLEEP_0
> -					   &CLUSTER_SLEEP_0>;
>  			next-level-cache = <&L2_700>;
>  			qcom,freq-domain = <&cpufreq_hw 2>;
> +			power-domains = <&CPU_PD7>;
> +			power-domain-names = "psci";
>  			#cooling-cells = <2>;
>  			L2_700: l2-cache {
>  				compatible = "cache";
> @@ -251,11 +251,13 @@ BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
>  				min-residency-us = <4488>;
>  				local-timer-stop;
>  			};
> +		};
>  
> +		domain-idle-states {
>  			CLUSTER_SLEEP_0: cluster-sleep-0 {
> -				compatible = "arm,idle-state";
> +				compatible = "domain-idle-state";
>  				idle-state-name = "cluster-power-collapse";
> -				arm,psci-suspend-param = <0x400000F4>;
> +				arm,psci-suspend-param = <0x4100c244>;
>  				entry-latency-us = <3263>;
>  				exit-latency-us = <6562>;
>  				min-residency-us = <9987>;
> @@ -291,6 +293,59 @@ pmu {
>  	psci {
>  		compatible = "arm,psci-1.0";
>  		method = "smc";
> +
> +		CPU_PD0: cpu0 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD1: cpu1 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD2: cpu2 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD3: cpu3 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&LITTLE_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD4: cpu4 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD5: cpu5 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD6: cpu6 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +		};
> +
> +		CPU_PD7: cpu7 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +			domain-idle-states = <&BIG_CPU_SLEEP_0>;
> +		};
> +
> +		CLUSTER_PD: cpu-cluster0 {
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_SLEEP_0>;
> +		};
>  	};
>  
>  	reserved-memory {
> -- 
> 2.29.2
> 
