Date:   Thu, 31 Oct 2019 09:44:20 +0000
From:   Ionela Voinescu <ionela.voinescu@....com>
To:     Thara Gopinath <thara.gopinath@...aro.org>
Cc:     mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
        rui.zhang@...el.com, edubezval@...il.com, qperret@...gle.com,
        linux-kernel@...r.kernel.org, amit.kachhap@...il.com,
        javi.merino@...nel.org, daniel.lezcano@...aro.org
Subject: Re: [Patch v4 0/6] Introduce Thermal Pressure

Hi Thara,

On Tuesday 22 Oct 2019 at 16:34:19 (-0400), Thara Gopinath wrote:
> Thermal governors can respond to an overheat event on a cpu by
> capping the cpu's maximum possible frequency. This in turn
> means that the maximum available compute capacity of the
> cpu is restricted. But today in the kernel, the task scheduler is
> not notified when the maximum frequency of a cpu is capped.
> In other words, the scheduler is unware of the maximum capacity

Nit: s/unware/unaware

> restrictions placed on a cpu due to thermal activity.
> This patch series attempts to address this issue.
> The benefits identified are better task placement among available
> cpus in the event of overheating, which in turn leads to better
> performance numbers.
> 
> The reduction in the maximum possible capacity of a cpu due to a
> thermal event can be considered as thermal pressure. Instantaneous
> thermal pressure is hard to record and can sometimes be erroneous,
> as there can be a mismatch between the actual capping of capacity
> and the scheduler recording it. Thus the solution is to have a
> weighted average per-cpu value for thermal pressure over time.
> The weight reflects the amount of time the cpu has spent at a
> capped maximum frequency. Since thermal pressure is recorded as
> an average, it must be decayed periodically. The existing algorithm
> in the kernel scheduler's PELT framework is reused to calculate
> the weighted average. This patch series also defines a sysctl
> interface to allow for a configurable decay period.
> 
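As a rough illustration of the decay scheme described above, here is a
minimal sketch. The names, the fixed-point arithmetic, and the update
period are assumptions for illustration, not the code in this series.

/*
 * Per-cpu thermal pressure kept as an exponentially weighted moving
 * average, decayed geometrically the way PELT decays its signals
 * (y^32 = 0.5, i.e. the history halves roughly every 32 periods).
 * All names here are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define CAPACITY_SCALE	1024
/* y in 10-bit fixed point: 0.5^(1/32) * 1024 ~= 1002 */
#define DECAY_Y		1002

struct thermal_pressure {
	uint64_t avg;	/* decayed average, in capacity units */
	uint64_t delta;	/* instantaneous capped-away capacity */
};

/* Cooling device reports a new maximum frequency cap for the cpu. */
static void thermal_pressure_set(struct thermal_pressure *tp,
				 uint64_t max_cap, uint64_t capped_cap)
{
	tp->delta = max_cap - capped_cap;
}

/*
 * Called periodically (e.g. from the scheduler tick) for each cpu:
 * avg = avg * y + delta * (1 - y), so a sustained cap converges to
 * the capped-away capacity and decays away once the cap is lifted.
 */
static void thermal_pressure_update(struct thermal_pressure *tp)
{
	tp->avg = (tp->avg * DECAY_Y +
		   tp->delta * (CAPACITY_SCALE - DECAY_Y)) >> 10;
}

int main(void)
{
	struct thermal_pressure tp = { 0, 0 };

	/* Cap a 1024-capacity cpu to 700: pressure converges to 324. */
	thermal_pressure_set(&tp, 1024, 700);
	for (int i = 0; i < 256; i++)
		thermal_pressure_update(&tp);
	printf("avg after 256 periods: %llu\n",
	       (unsigned long long)tp.avg);
	return 0;
}

A configurable decay period, like the sysctl mentioned above, would
amount to changing the effective half-life of this average.
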
> Regarding testing, basic build, boot, and sanity testing have been
> performed on a db845c platform with a Debian file system.
> Further, dhrystone and hackbench tests have been
> run with the thermal pressure algorithm. During testing, due to
> constraints of the step-wise governor in dealing with big.LITTLE systems,
> the trip point 0 temperature was made asymmetric between the cpus in the
> little cluster and the big cluster; the idea being that the
> big cores will heat up and the cpu cooling device will throttle the
> frequency of the big cores faster, thereby limiting the maximum available
> capacity, and the scheduler will spread out tasks to the little cores as well.
>

Can you please share the changes you've made to sdm845.dtsi and the kernel
base on top of which to apply your patches? I would like to reproduce
your results and run more tests, and it would be good if our setups were
as close as possible.

> Test Results
> 
> Hackbench: 1 group, 30000 loops, 10 runs
>                                                Result     SD
>                                                (Secs)     (% of mean)
>  No Thermal Pressure                            14.03      2.69%
>  Thermal Pressure PELT Algo. Decay : 32 ms      13.29      0.56%
>  Thermal Pressure PELT Algo. Decay : 64 ms      12.57      1.56%
>  Thermal Pressure PELT Algo. Decay : 128 ms     12.71      1.04%
>  Thermal Pressure PELT Algo. Decay : 256 ms     12.29      1.42%
>  Thermal Pressure PELT Algo. Decay : 512 ms     12.42      1.15%
> 
> Dhrystone Run Time: 20 threads, 3000 MLOOPS
>                                                  Result      SD             
>                                                  (Secs)    (% of mean)     
>  No Thermal Pressure                              9.452      4.49%
>  Thermal Pressure PELT Algo. Decay : 32 ms        8.793      5.30%
>  Thermal Pressure PELT Algo. Decay : 64 ms        8.981      5.29%
>  Thermal Pressure PELT Algo. Decay : 128 ms       8.647      6.62%
>  Thermal Pressure PELT Algo. Decay : 256 ms       8.774      6.45%
>  Thermal Pressure PELT Algo. Decay : 512 ms       8.603      5.41%  
>

Do you happen to know by how much the CPUs were capped during these
experiments?

Thanks,
Ionela.

> A Brief History
> 
> The first version of this patch series was posted reusing the
> PELT algorithm to decay the thermal pressure signal. The discussions
> that followed were around whether an instantaneous thermal pressure
> solution is better, and whether a stand-alone algorithm to accumulate
> and decay thermal pressure is more appropriate than reusing the
> PELT framework.
> Tests on Hikey960 showed the stand-alone algorithm performing slightly
> better than reusing the PELT algorithm, and v2 was posted with the
> stand-alone algorithm. Test results were shared as part of that series.
> Discussions were around reusing the PELT algorithm and running
> further tests with more granular decay periods.
> 
> For some time after this, development was impeded due to hardware
> unavailability and some other unforeseen and possibly unfortunate events.
> For this version, the h/w was switched from Hikey960 to db845c.
> Also, instantaneous thermal pressure was never tested as part of this
> cycle, as it is clear that a weighted average is the better implementation.
> In this round of testing, the non-PELT algorithm never gave any conclusive
> results to prove that it is better than reusing the PELT algorithm.
> Also, reusing the PELT algorithm means thermal pressure tracks the
> other utilization signals in the scheduler.
> 
> v3->v4:
> 	- "Patch 3/7: sched: Initialize per cpu thermal pressure structure"
> 	   is dropped as it is no longer needed following changes in the
> 	   other patches.
> 	- rest of the change log mentioned in specific patches.
> 
> Thara Gopinath (6):
>   sched/pelt.c: Add support to track thermal pressure
>   sched: Add infrastructure to store and update instantaneous thermal
>     pressure
>   sched/fair: Enable CFS periodic tick to update thermal pressure
>   sched/fair: update cpu_capacity to reflect thermal pressure
>   thermal/cpu-cooling: Update thermal pressure in case of a maximum
>     frequency capping
>   sched: thermal: Enable tuning of decay period
> 
>  Documentation/admin-guide/kernel-parameters.txt |  5 ++
>  drivers/thermal/cpu_cooling.c                   | 31 ++++++++++-
>  include/linux/sched.h                           |  8 +++
>  kernel/sched/Makefile                           |  2 +-
>  kernel/sched/fair.c                             |  6 +++
>  kernel/sched/pelt.c                             | 13 +++++
>  kernel/sched/pelt.h                             |  7 +++
>  kernel/sched/sched.h                            |  1 +
>  kernel/sched/thermal.c                          | 68 +++++++++++++++++++++++++
>  kernel/sched/thermal.h                          | 13 +++++
>  10 files changed, 151 insertions(+), 3 deletions(-)
>  create mode 100644 kernel/sched/thermal.c
>  create mode 100644 kernel/sched/thermal.h
> 
> -- 
> 2.1.4
> 
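To make the intent of the fourth patch above ("sched/fair: update
cpu_capacity to reflect thermal pressure") concrete, here is a hedged
sketch of how a decayed thermal pressure value might be folded into the
capacity the scheduler sees. The function name and the floor value are
assumptions for illustration, not the patch itself.

/*
 * Illustrative only: subtract the decayed thermal pressure from a
 * cpu's original capacity so that load balancing sees the thermally
 * throttled headroom. Names and the floor of 1 are assumptions.
 */
static unsigned long effective_cpu_capacity(unsigned long orig_capacity,
					    unsigned long thermal_pressure)
{
	if (thermal_pressure >= orig_capacity)
		return 1;	/* never report a zero-capacity cpu */
	return orig_capacity - thermal_pressure;
}

Under a scheme like this, tasks would migrate toward the little cores
once the big cores' effective capacity drops below that of the little
cores, which matches the behaviour the asymmetric trip-point setup
above was designed to provoke.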
