Message-ID: <5DBB0EB0.9050106@linaro.org>
Date:   Thu, 31 Oct 2019 12:41:20 -0400
From:   Thara Gopinath <thara.gopinath@...aro.org>
To:     Ionela Voinescu <ionela.voinescu@....com>
Cc:     mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
        rui.zhang@...el.com, edubezval@...il.com, qperret@...gle.com,
        linux-kernel@...r.kernel.org, amit.kachhap@...il.com,
        javi.merino@...nel.org, daniel.lezcano@...aro.org
Subject: Re: [Patch v4 0/6] Introduce Thermal Pressure

On 10/31/2019 05:44 AM, Ionela Voinescu wrote:
> Hi Thara,
> 
> On Tuesday 22 Oct 2019 at 16:34:19 (-0400), Thara Gopinath wrote:
>> Thermal governors can respond to an overheat event of a cpu by
>> capping the cpu's maximum possible frequency. This in turn
>> means that the maximum available compute capacity of the
>> cpu is restricted. But today in the kernel, the task scheduler is
>> not notified of the capping of a cpu's maximum frequency.
>> In other words, the scheduler is unware of the maximum capacity
> 
> Nit: s/unware/unaware
> 
>> restrictions placed on a cpu due to thermal activity.
>> This patch series attempts to address this issue.
>> The benefits identified are better task placement among the available
>> cpus in the event of overheating, which in turn leads to better
>> performance numbers.
>>
>> The reduction in the maximum possible capacity of a cpu due to a 
>> thermal event can be considered thermal pressure. Instantaneous
>> thermal pressure is hard to record and can sometimes be erroneous,
>> as there can be a mismatch between the actual capping of capacity
>> and the scheduler recording it. Thus the solution is to have a
>> weighted average per-cpu value for thermal pressure over time.
>> The weight reflects the amount of time the cpu has spent at a
>> capped maximum frequency. Since thermal pressure is recorded as
>> an average, it must be decayed periodically. An existing algorithm
>> in the kernel scheduler PELT framework is re-used to calculate
>> the weighted average. This patch series also defines a sysctl
>> interface to allow for a configurable decay period.
>>
>> Regarding testing, basic build, boot and sanity testing have been
>> performed on the db845c platform with a Debian file system.
>> Further, dhrystone and hackbench tests have been
>> run with the thermal pressure algorithm. During testing, due to
>> constraints of the step-wise governor in dealing with big.LITTLE systems,
>> the trip point 0 temperature was made asymmetric between the cpus in the
>> little cluster and the big cluster; the idea being that the
>> big cores will heat up and the cpu cooling device will throttle the
>> frequency of the big cores faster, thereby limiting the maximum available
>> capacity, and the scheduler will spread out tasks to the little cores as well.
>>
> 
> Can you please share the changes you've made to sdm845.dtsi and a kernel
> base on top of which to apply your patches? I would like to reproduce
> your results and run more tests and it would be good if our setups were
> as close as possible.
Hi Ionela,
Thank you for the review.
So I tested this on the 5.4-rc1 kernel. The dtsi change is to reduce the
thermal trip points for the big CPUs to 60000 or 70000 from the default
90000. I did this for two reasons:
1. I could never get the db845 to heat up sufficiently for my test cases
with the default trip.
2. I was using the default step-wise governor for thermal. I did not
want little and big to start throttling by the same % because then the
task placement ratio would remain the same between little and big cores;
a rough sketch of the frequency-cap to capacity relationship is below.
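
This is only a simplified sketch of that relationship, not the code in
this series; the linear frequency-to-capacity mapping and all names and
numbers below are illustrative:

/*
 * Simplified illustration: when the cooling device caps a CPU's
 * frequency, the usable compute capacity shrinks roughly in
 * proportion, and the capacity lost to the cap is what the series
 * accounts for as thermal pressure.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Capacity left once the frequency is capped (linear assumption). */
static unsigned long capped_capacity(unsigned long max_cap,
                                     unsigned long capped_freq,
                                     unsigned long max_freq)
{
        return max_cap * capped_freq / max_freq;
}

int main(void)
{
        /* Made-up numbers: a big core capped from 2.8 GHz to 1.4 GHz. */
        unsigned long cap = capped_capacity(SCHED_CAPACITY_SCALE,
                                            1400000, 2800000);

        printf("capped capacity %lu, thermal pressure %lu\n",
               cap, SCHED_CAPACITY_SCALE - cap);
        return 0;
}

With the asymmetric trips only the big cores lose capacity this way,
which is what gives the scheduler a reason to spread tasks out to the
little cores.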


> 
>> Test Results
>>
>> Hackbench: 1 group, 30000 loops, 10 runs
>>                                                Result         SD             
>>                                                (Secs)     (% of mean)     
>>  No Thermal Pressure                            14.03       2.69%           
>>  Thermal Pressure PELT Algo. Decay : 32 ms      13.29       0.56%         
>>  Thermal Pressure PELT Algo. Decay : 64 ms      12.57       1.56%           
>>  Thermal Pressure PELT Algo. Decay : 128 ms     12.71       1.04%         
>>  Thermal Pressure PELT Algo. Decay : 256 ms     12.29       1.42%           
>>  Thermal Pressure PELT Algo. Decay : 512 ms     12.42       1.15%  
>>
>> Dhrystone Run Time: 20 threads, 3000 MLOOPS
>>                                                  Result      SD             
>>                                                  (Secs)    (% of mean)     
>>  No Thermal Pressure                              9.452      4.49%
>>  Thermal Pressure PELT Algo. Decay : 32 ms        8.793      5.30%
>>  Thermal Pressure PELT Algo. Decay : 64 ms        8.981      5.29%
>>  Thermal Pressure PELT Algo. Decay : 128 ms       8.647      6.62%
>>  Thermal Pressure PELT Algo. Decay : 256 ms       8.774      6.45%
>>  Thermal Pressure PELT Algo. Decay : 512 ms       8.603      5.41%  
>>
> 
> Do you happen to know by how much the CPUs were capped during these
> experiments?

I don't have any captured results here. I know that the big cores were
capped and at times there was capacity inversion.
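
Just to illustrate what I mean by capacity inversion (made-up numbers,
not captured data):

/*
 * Capacity inversion: a throttled big core ends up with less usable
 * capacity than an unthrottled little core, so the "big" cpu is
 * temporarily the smaller one from the scheduler's point of view.
 */
#include <stdio.h>

int main(void)
{
        unsigned long big_max = 1024, little_max = 600;
        /* Big core capped to 50% of its max frequency, little uncapped. */
        unsigned long big_capped = big_max * 50 / 100;

        if (big_capped < little_max)
                printf("inverted: big %lu < little %lu\n",
                       big_capped, little_max);
        return 0;
}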

Also I will fix the nit comments above.
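
On the decay period the results above sweep over: below is a minimal,
self-contained sketch of a periodically decayed average (illustrative
only, not the scheduler's PELT implementation; the half-life model and
constants are made up). It shows why a longer decay period makes the
recorded thermal pressure linger after the cap is lifted.

/*
 * Running average that decays by half every decay_period_ms when no
 * pressure is being applied, and otherwise converges toward the
 * currently observed pressure.
 */
#include <math.h>
#include <stdio.h>

static double update_avg(double avg, double sample_pressure,
                         double tick_ms, double decay_period_ms)
{
        double decay = exp2(-tick_ms / decay_period_ms);

        return avg * decay + sample_pressure * (1.0 - decay);
}

int main(void)
{
        double avg = 0.0;
        int t;

        /* 100 ms of capping (pressure 512), then the cap is lifted. */
        for (t = 0; t < 300; t += 4)
                avg = update_avg(avg, t < 100 ? 512.0 : 0.0, 4.0, 256.0);

        printf("avg pressure 200 ms after uncapping: %.1f\n", avg);
        return 0;
}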

> 
> Thanks,
> Ionela.
> 



-- 
Warm Regards
Thara
