Date:   Thu, 21 Sep 2023 11:12:09 +0100
From:   Ionela Voinescu <ionela.voinescu@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     linux@...linux.org.uk, catalin.marinas@....com, will@...nel.org,
        paul.walmsley@...ive.com, palmer@...belt.com,
        aou@...s.berkeley.edu, sudeep.holla@....com,
        gregkh@...uxfoundation.org, rafael@...nel.org, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
        viresh.kumar@...aro.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
        linux-pm@...r.kernel.org, conor.dooley@...rochip.com,
        suagrfillet@...il.com, ajones@...tanamicro.com, lftan@...nel.org
Subject: Re: [PATCH 0/4] consolidate and cleanup CPU capacity

On Friday 01 Sep 2023 at 15:03:08 (+0200), Vincent Guittot wrote:
> This is the 1st part of consolidating how the max compute capacity is
> used in the scheduler and how we calculate the frequency for a level of
> utilization.
> 
> Fix some inconsistency when computing the frequency for a utilization.
> There can be a mismatch between the energy model and schedutil.
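
For context, the frequency schedutil requests for a given utilization is
roughly freq = 1.25 * reference_freq * util / max, with the reference being
policy->cpuinfo.max_freq on frequency-invariant systems. The sketch below is
standalone and uses illustrative values; util_to_freq() is only a stand-in
for that arithmetic, not the kernel helper. It shows how a different
reference maximum in the energy model yields a different frequency for the
same utilization:

/*
 * Standalone sketch (illustrative values, not kernel code): if the energy
 * model uses a different reference maximum frequency than schedutil, the
 * same utilization maps to different frequencies.
 */
#include <stdio.h>

/* stand-in for the schedutil arithmetic, not the kernel helper itself */
static unsigned long util_to_freq(unsigned long util, unsigned long ref,
				  unsigned long max)
{
	return (ref + (ref >> 2)) * util / max;	/* 25% headroom */
}

int main(void)
{
	unsigned long util = 512, max = 1024;	/* 50% utilization */
	unsigned long cpuinfo_max = 3000000;	/* kHz, boost included */
	unsigned long em_max = 2600000;		/* kHz, EM table maximum */

	printf("schedutil reference: %lu kHz\n",
	       util_to_freq(util, cpuinfo_max, max));
	printf("EM reference:        %lu kHz\n",
	       util_to_freq(util, em_max, max));
	return 0;
}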

There are a few more pieces of functionality that would be worth
consolidating in this set as well, if you'd like to consider them:

- arch_set_freq_scale() still uses policy->cpuinfo.max_freq. It might be
  good to use the boot-time stored max_freq here as well. Given that
  arch_scale_cpu_capacity() would be based on that stored value, if
  arch_scale_freq_capacity() ends up using a different value, it could
  have interesting effects on the utilization signals in the case of
  boosting (see the sketch after this list).

- As Pierre mentioned in a previous comment, there is already a
  cpufreq_get_hw_max_freq() weak function that returns
  policy->cpuinfo.max_freq, and it's only used at boot time by the setup
  code that enables AMUs for frequency invariance. I'm tempted to suggest
  using it to initialize what is now "freq_factor": my intention when I
  created that function was to let platforms implement their own version
  and decide which frequency to treat as their maximum (see the sketch
  after this list). This could have been an arch_ function as well, but
  as you mentioned before, mobile and server platforms might want to
  choose different maximum values even if they are using the same
  architecture.
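
A minimal standalone sketch of both points above (compiled outside the
kernel; hw_max_freq() and set_freq_scale() are illustrative stand-ins for
cpufreq_get_hw_max_freq() and the scale computation done for
arch_set_freq_scale(), and all frequency values are made up):

/*
 * Point 1: the frequency-invariance scale is cur_freq scaled against a
 * maximum. If that maximum is policy->cpuinfo.max_freq (boost included)
 * while CPU capacity was derived from a lower boot-time maximum, util is
 * scaled against a different reference than capacity.
 * Point 2: a weak default can be overridden by a platform that wants a
 * different notion of "maximum" frequency.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10

/* weak default, analogous to the __weak cpufreq_get_hw_max_freq() */
__attribute__((weak)) unsigned long hw_max_freq(void)
{
	return 3000000;		/* kHz: cpuinfo.max_freq, boost included */
}

/* stand-in for the scale computation used for frequency invariance */
static unsigned long set_freq_scale(unsigned long cur, unsigned long max)
{
	return (cur << SCHED_CAPACITY_SHIFT) / max;
}

int main(void)
{
	unsigned long cur = 2500000;		/* kHz, current frequency */
	unsigned long boot_max = 2500000;	/* kHz, boot-time stored maximum */

	/* same current frequency, two different references for the scale */
	printf("scale vs cpuinfo.max_freq: %lu\n",
	       set_freq_scale(cur, hw_max_freq()));
	printf("scale vs boot-time max:    %lu\n",
	       set_freq_scale(cur, boot_max));
	return 0;
}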

Thanks,
Ionela.

> 
> The next step will be to distinguish between the original max compute
> capacity of a CPU and what is currently available when a capping is
> applied for a long time (i.e. seconds or more).
> 
> Vincent Guittot (4):
>   sched: consolidate and cleanup access to CPU's max compute capacity
>   topology: add a new arch_scale_freq_reference
>   cpufreq/schedutil: use a fixed reference frequency
>   energy_model: use a fixed reference frequency
> 
>  arch/arm/include/asm/topology.h   |  1 +
>  arch/arm64/include/asm/topology.h |  1 +
>  arch/riscv/include/asm/topology.h |  1 +
>  drivers/base/arch_topology.c      |  9 +++------
>  include/linux/arch_topology.h     |  7 +++++++
>  include/linux/energy_model.h      | 20 +++++++++++++++++---
>  kernel/sched/core.c               |  2 +-
>  kernel/sched/cpudeadline.c        |  2 +-
>  kernel/sched/cpufreq_schedutil.c  | 29 +++++++++++++++++++++++++++--
>  kernel/sched/deadline.c           |  4 ++--
>  kernel/sched/fair.c               | 18 ++++++++----------
>  kernel/sched/rt.c                 |  2 +-
>  kernel/sched/sched.h              |  6 ------
>  kernel/sched/topology.c           |  7 +++++--
>  14 files changed, 75 insertions(+), 34 deletions(-)
> 
> -- 
> 2.34.1
> 
> 
