Message-ID: <3bfb4e65-b746-449e-a9e7-acda24897045@arm.com>
Date: Fri, 21 Jun 2024 10:22:40 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Qais Yousef <qyousef@...alina.io>,
 Vincent Guittot <vincent.guittot@...aro.org>
Cc: Xuewen Yan <xuewen.yan94@...il.com>, Xuewen Yan <xuewen.yan@...soc.com>,
 mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
 rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
 bristot@...hat.com, vschneid@...hat.com, vincent.donnefort@....com,
 ke.wang@...soc.com, linux-kernel@...r.kernel.org, christian.loehle@....com
Subject: Re: [PATCH] sched/fair: Prevent cpu_busy_time from exceeding
 actual_cpu_capacity

On 20/06/2024 13:37, Qais Yousef wrote:
> On 06/20/24 09:45, Vincent Guittot wrote:
>> On Wed, 19 Jun 2024 at 20:10, Qais Yousef <qyousef@...alina.io> wrote:
>>>
>>> On 06/19/24 11:05, Xuewen Yan wrote:
>>>> On Tue, Jun 18, 2024 at 11:39 PM Qais Yousef <qyousef@...alina.io> wrote:
>>>>>
>>>>> On 06/18/24 17:23, Vincent Guittot wrote:
>>>>>> On Mon, 17 Jun 2024 at 12:53, Qais Yousef <qyousef@...alina.io> wrote:
>>>>>>>
>>>>>>> On 06/17/24 11:07, Vincent Guittot wrote:

[...]

>>>> diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
>>>> index 280071be30b1..a8546d69cc10 100644
>>>> --- a/drivers/thermal/cpufreq_cooling.c
>>>> +++ b/drivers/thermal/cpufreq_cooling.c
>>>> @@ -164,7 +164,7 @@ static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
>>>>  {
>>>>         unsigned long util = sched_cpu_util(cpu);
>>>>
>>>> -       return (util * 100) / arch_scale_cpu_capacity(cpu);
>>>> +       return (util * 100) / get_actual_cpu_capacity(cpu);
>>>>  }
>>>>  #else /* !CONFIG_SMP */
>>>>  static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
>>>>
>>>>
>>>> Because if we still use arch_scale_cpu_capacity(), the reported load
>>>> percentage may be too low, which may affect the thermal IPA governor's
>>>> power estimation.
>>>
>>> I am not sure about this one. But looks plausible. Vincent?
>>
>> I don't see why we should change them? We don't want to change
>> sched_cpu_util() either.
>> AFAICT, the only outcome of this thread is that we should use
>> get_actual_cpu_capacity() instead of arch_scale_cpu_capacity() in
>> util_fits_cpu(). Capping the utilization only makes the estimation
>> worse.
> 
> Yes, my bad. Only the util_fits_cpu() change is needed now.
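
To make the get_load() concern quoted above concrete, here is a minimal,
illustrative sketch with made-up numbers (util = 410 and a capacity capped
from 1024 to 819 are assumptions for the example, not values from this
thread):

/* Illustration only, not kernel code: made-up numbers. */
#include <stdio.h>

int main(void)
{
	unsigned long util = 410;	/* hypothetical sched_cpu_util(cpu) value      */
	unsigned long cap_orig = 1024;	/* arch_scale_cpu_capacity(cpu)                */
	unsigned long cap_actual = 819;	/* capacity left after HW/thermal capping      */

	/* Dividing by the uncapped capacity reports a lower load percentage ... */
	printf("load vs original capacity: %lu%%\n", (util * 100) / cap_orig);   /* 40% */

	/* ... than dividing by what the CPU can actually deliver right now. */
	printf("load vs actual capacity:   %lu%%\n", (util * 100) / cap_actual); /* 50% */

	return 0;
}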

Looks like that's for the uclamp part (the 2nd part) of util_fits_cpu().

For the first part we use capacity_of(), which is based on
get_actual_cpu_capacity() [scale_rt_capacity()] and changes every 4ms
[250 Hz].
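
Roughly, and heavily simplified, the two parts sit like this (a sketch
only, not the actual mainline util_fits_cpu(), which also handles uclamp
corner cases and capacity inversion):

/* Rough sketch; see kernel/sched/fair.c for the real util_fits_cpu(). */
static inline bool util_fits_cpu_sketch(unsigned long util,
					unsigned long uclamp_min,
					unsigned long uclamp_max,
					int cpu)
{
	/*
	 * 1st part: raw utilization vs. capacity_of(), which already
	 * accounts for RT/DL/IRQ and pressure via scale_rt_capacity()
	 * and is refreshed roughly every 4ms (250 Hz).
	 */
	bool fits = fits_capacity(util, capacity_of(cpu));

	/*
	 * 2nd part (the uclamp part): compare the clamps against what the
	 * CPU can actually deliver now; this is where
	 * get_actual_cpu_capacity() would replace arch_scale_cpu_capacity().
	 */
	unsigned long capacity = get_actual_cpu_capacity(cpu);

	fits = fits || uclamp_max <= capacity;	/* ceiling fits even if util doesn't */
	fits = fits && uclamp_min <= capacity;	/* floor must be deliverable */

	return fits;
}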

Our assumption is that hw_load_avg() and cpufreq_get_pressure() change
way less frequently than that, right?

So we can use capacity_of() and get_actual_cpu_capacity() in the same
code path.
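
For reference, get_actual_cpu_capacity() is roughly (going from memory,
kernel/sched/fair.c is authoritative) the original capacity minus the
larger of the two pressure signals:

static unsigned long get_actual_cpu_capacity(int cpu)
{
	unsigned long capacity = arch_scale_cpu_capacity(cpu);

	/* Subtract whichever pressure is bigger: HW (thermal) or cpufreq capping. */
	capacity -= max(hw_load_avg(cpu_rq(cpu)), cpufreq_get_pressure(cpu));

	return capacity;
}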
