Message-ID: <4454b03d-9e1d-42a9-a8e6-177e3848a366@oss.qualcomm.com>
Date: Tue, 10 Feb 2026 10:13:44 +0530
From: Gaurav Kohli <gaurav.kohli@....qualcomm.com>
To: Manivannan Sadhasivam <mani@...nel.org>
Cc: andersson@...nel.org, konradybcio@...nel.org, robh@...nel.org,
        krzk+dt@...nel.org, conor+dt@...nel.org, linux-arm-msm@...r.kernel.org,
        devicetree@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: dts: qcom: hamoa: Enable cpufreq cooling devices



On 2/3/2026 11:21 AM, Manivannan Sadhasivam wrote:
> On Wed, Jan 28, 2026 at 11:02:08AM +0530, Gaurav Kohli wrote:
>> Add cooling-cells property to the CPU nodes to support cpufreq
>> cooling devices.
>>
>> Signed-off-by: Gaurav Kohli <gaurav.kohli@....qualcomm.com>
> 
> FYI: I submitted a similar version back in October:
> https://lore.kernel.org/linux-arm-msm/20251015065703.9422-1-mani@kernel.org/
> 

Hi Mani,

Thanks for sharing the link. Could you please respin your patch so that
it can get merged? We need this cpufreq support enabled.
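
For anyone following along: adding `#cooling-cells = <2>` to the CPU nodes
is what allows a thermal zone to bind them as passive cooling devices via a
cooling-map. A hedged sketch of what such a consumer could look like (the
thermal zone name, trip values, and sensor phandle below are illustrative
placeholders, not taken from hamoa.dtsi):

	thermal-zones {
		cpu0-thermal {
			polling-delay-passive = <250>;
			polling-delay = <1000>;
			/* sensor phandle is a placeholder */
			thermal-sensors = <&tsens0 1>;

			trips {
				cpu0_alert: trip-point0 {
					temperature = <95000>;	/* millicelsius */
					hysteresis = <2000>;
					type = "passive";
				};
			};

			cooling-maps {
				map0 {
					trip = <&cpu0_alert>;
					/* cells are <min-state max-state>;
					 * THERMAL_NO_LIMIT = no restriction
					 */
					cooling-device = <&cpu0 THERMAL_NO_LIMIT
							  THERMAL_NO_LIMIT>;
				};
			};
		};
	};

The two cells declared by `#cooling-cells = <2>` carry the minimum and
maximum cooling states (cpufreq performance-state indices) that a given
map is allowed to use.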

> - Mani
> 
>> ---
>>   arch/arm64/boot/dts/qcom/hamoa.dtsi | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/qcom/hamoa.dtsi b/arch/arm64/boot/dts/qcom/hamoa.dtsi
>> index db65c392e618..799e405a9f87 100644
>> --- a/arch/arm64/boot/dts/qcom/hamoa.dtsi
>> +++ b/arch/arm64/boot/dts/qcom/hamoa.dtsi
>> @@ -75,6 +75,7 @@ cpu0: cpu@0 {
>>   			next-level-cache = <&l2_0>;
>>   			power-domains = <&cpu_pd0>, <&scmi_dvfs 0>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   
>>   			l2_0: l2-cache {
>>   				compatible = "cache";
>> @@ -91,6 +92,7 @@ cpu1: cpu@100 {
>>   			next-level-cache = <&l2_0>;
>>   			power-domains = <&cpu_pd1>, <&scmi_dvfs 0>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu2: cpu@200 {
>> @@ -101,6 +103,7 @@ cpu2: cpu@200 {
>>   			next-level-cache = <&l2_0>;
>>   			power-domains = <&cpu_pd2>, <&scmi_dvfs 0>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu3: cpu@300 {
>> @@ -111,6 +114,7 @@ cpu3: cpu@300 {
>>   			next-level-cache = <&l2_0>;
>>   			power-domains = <&cpu_pd3>, <&scmi_dvfs 0>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu4: cpu@...00 {
>> @@ -121,6 +125,7 @@ cpu4: cpu@...00 {
>>   			next-level-cache = <&l2_1>;
>>   			power-domains = <&cpu_pd4>, <&scmi_dvfs 1>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   
>>   			l2_1: l2-cache {
>>   				compatible = "cache";
>> @@ -137,6 +142,7 @@ cpu5: cpu@...00 {
>>   			next-level-cache = <&l2_1>;
>>   			power-domains = <&cpu_pd5>, <&scmi_dvfs 1>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu6: cpu@...00 {
>> @@ -147,6 +153,7 @@ cpu6: cpu@...00 {
>>   			next-level-cache = <&l2_1>;
>>   			power-domains = <&cpu_pd6>, <&scmi_dvfs 1>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu7: cpu@...00 {
>> @@ -157,6 +164,7 @@ cpu7: cpu@...00 {
>>   			next-level-cache = <&l2_1>;
>>   			power-domains = <&cpu_pd7>, <&scmi_dvfs 1>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu8: cpu@...00 {
>> @@ -167,6 +175,7 @@ cpu8: cpu@...00 {
>>   			next-level-cache = <&l2_2>;
>>   			power-domains = <&cpu_pd8>, <&scmi_dvfs 2>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   
>>   			l2_2: l2-cache {
>>   				compatible = "cache";
>> @@ -183,6 +192,7 @@ cpu9: cpu@...00 {
>>   			next-level-cache = <&l2_2>;
>>   			power-domains = <&cpu_pd9>, <&scmi_dvfs 2>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu10: cpu@...00 {
>> @@ -193,6 +203,7 @@ cpu10: cpu@...00 {
>>   			next-level-cache = <&l2_2>;
>>   			power-domains = <&cpu_pd10>, <&scmi_dvfs 2>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu11: cpu@...00 {
>> @@ -203,6 +214,7 @@ cpu11: cpu@...00 {
>>   			next-level-cache = <&l2_2>;
>>   			power-domains = <&cpu_pd11>, <&scmi_dvfs 2>;
>>   			power-domain-names = "psci", "perf";
>> +			#cooling-cells = <2>;
>>   		};
>>   
>>   		cpu-map {
>> -- 
>> 2.34.1
>>
> 

