Date:   Thu, 14 Dec 2023 08:22:10 +0000
From:   Lukasz Luba <lukasz.luba@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     catalin.marinas@....com, will@...nel.org,
        linux-trace-kernel@...r.kernel.org, amit.kachhap@...il.com,
        daniel.lezcano@...aro.org, mhiramat@...nel.org,
        vschneid@...hat.com, bristot@...hat.com, mgorman@...e.de,
        bsegall@...gle.com, juri.lelli@...hat.com, peterz@...radead.org,
        mingo@...hat.com, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org, konrad.dybcio@...aro.org,
        andersson@...nel.org, agross@...nel.org, rui.zhang@...el.com,
        viresh.kumar@...aro.org, rafael@...nel.org, sudeep.holla@....com,
        dietmar.eggemann@....com, rostedt@...dmis.org,
        linux-arm-kernel@...ts.infradead.org, linux-arm-msm@...r.kernel.org
Subject: Re: [PATCH 0/5] Rework system pressure interface to the scheduler

Hi Vincent,

I've been waiting for this feature, thanks!


On 12/12/23 14:27, Vincent Guittot wrote:
> Following the consolidation and cleanup of CPU capacity in [1], this series
> reworks how the scheduler gets the pressures on CPUs. We need to take into
> account all pressures applied by cpufreq on the compute capacity of a CPU
> for dozens of ms or more, and not only the cpufreq cooling device or HW
> mitigations. We split the pressure applied on the CPU's capacity into 2 parts:
> - one from cpufreq and freq_qos
> - one from HW high-frequency mitigation.
> 
> The next step will be to add a dedicated interface for long-standing
> capping of the CPU capacity (i.e. for seconds or more), like the
> scaling_max_freq of cpufreq sysfs. The latter is already taken into
> account by this series, but as a temporary pressure, which is not always
> the best choice when we know that the capping will last for seconds or more.
> 
> [1] https://lore.kernel.org/lkml/20231211104855.558096-1-vincent.guittot@linaro.org/
> 
> Vincent Guittot (4):
>    cpufreq: Add a cpufreq pressure feedback for the scheduler
>    sched: Take cpufreq feedback into account
>    thermal/cpufreq: Remove arch_update_thermal_pressure()
>    sched: Rename arch_update_thermal_pressure into
>      arch_update_hw_pressure
> 
>   arch/arm/include/asm/topology.h               |  6 +--
>   arch/arm64/include/asm/topology.h             |  6 +--
>   drivers/base/arch_topology.c                  | 26 ++++-----
>   drivers/cpufreq/cpufreq.c                     | 48 +++++++++++++++++
>   drivers/cpufreq/qcom-cpufreq-hw.c             |  4 +-
>   drivers/thermal/cpufreq_cooling.c             |  3 --
>   include/linux/arch_topology.h                 |  8 +--
>   include/linux/cpufreq.h                       | 10 ++++
>   include/linux/sched/topology.h                |  8 +--
>   .../{thermal_pressure.h => hw_pressure.h}     | 14 ++---
>   include/trace/events/sched.h                  |  2 +-
>   init/Kconfig                                  | 12 ++---
>   kernel/sched/core.c                           |  8 +--
>   kernel/sched/fair.c                           | 53 ++++++++++---------
>   kernel/sched/pelt.c                           | 18 +++----
>   kernel/sched/pelt.h                           | 16 +++---
>   kernel/sched/sched.h                          |  4 +-
>   17 files changed, 152 insertions(+), 94 deletions(-)
>   rename include/trace/events/{thermal_pressure.h => hw_pressure.h} (55%)
> 
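Just to check that I read the design correctly: the scheduler ends up
with two independent pressure signals and applies them against the raw
CPU capacity. A toy userspace model of what I think the math looks like
is below (cpufreq_pressure/hw_pressure are placeholder names of mine,
not the interfaces from the patches, and taking the max of the two
rather than the sum is my guess from the cover letter):

#include <stdio.h>

/* Raw, unpressured capacity of the CPU (SCHED_CAPACITY_SCALE). */
#define MAX_CAPACITY 1024UL

/*
 * Effective capacity once both pressure sources are applied. Taking
 * the larger of the two (instead of adding them) avoids double-counting
 * when cpufreq and the HW mitigation cap the CPU for the same
 * underlying reason.
 */
static unsigned long effective_capacity(unsigned long cpufreq_pressure,
					unsigned long hw_pressure)
{
	unsigned long pressure = cpufreq_pressure > hw_pressure ?
				 cpufreq_pressure : hw_pressure;

	return MAX_CAPACITY - pressure;
}

int main(void)
{
	/* e.g. freq_qos caps ~25% of capacity, HW mitigation ~10% */
	printf("capacity = %lu\n", effective_capacity(256UL, 102UL));
	return 0;
}

If the two signals are meant to be combined differently (e.g. summed),
please correct me.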

I would like to test it, but something worries me: why does the subject
say 0/5 when only 4 patches are listed?

Could you tell me the base branch on which I can apply this, please?

Regards,
Lukasz
