Message-ID: <da0b9016-379b-4e4f-9741-5f69189661b9@arm.com>
Date: Mon, 10 Jun 2024 16:29:08 +0100
From: Hongyan Xia <hongyan.xia2@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Juri Lelli <juri.lelli@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
 Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
 Daniel Bristot de Oliveira <bristot@...hat.com>,
 Valentin Schneider <vschneid@...hat.com>
Cc: Qais Yousef <qyousef@...alina.io>,
 Morten Rasmussen <morten.rasmussen@....com>,
 Lukasz Luba <lukasz.luba@....com>,
 Christian Loehle <christian.loehle@....com>, pierre.gondois@....com,
 linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v3 2/6] sched/uclamp: Track a new util_avg_bias signal

On 28/05/2024 12:09, Hongyan Xia wrote:
> On 26/05/2024 23:52, Dietmar Eggemann wrote:
>[...]
>>> +    old = READ_ONCE(p->se.avg.util_avg_bias);
>>> +    new = (int)clamp(util, uclamp_min, uclamp_max) - (int)util;
>>> +
>>> +    WRITE_ONCE(p->se.avg.util_avg_bias, new);
>>> +    if (!p->se.on_rq)
>>> +        return;
>>> +    WRITE_ONCE(avg->util_avg_bias, READ_ONCE(avg->util_avg_bias) + new - old);
>>> +}
>>> +#else /* !CONFIG_UCLAMP_TASK */
>>> +static void update_util_bias(struct sched_avg *avg, struct task_struct *p)
>>> +{
>>> +}
>>> +#endif
>>> +
>>>   /*
>>>    * sched_entity:
>>>    *
>>> @@ -296,6 +330,8 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
>>>   {
>>>       if (___update_load_sum(now, &se->avg, 0, 0, 0)) {
>>>           ___update_load_avg(&se->avg, se_weight(se));
>>> +        if (entity_is_task(se))
>>> +            update_util_bias(NULL, task_of(se));
>>
>> IMHO,
>>
>> update_util_bias(struct sched_avg *avg, struct sched_entity *se)
>>
>>      if (!entity_is_task(se))
>>          return;
>>
>>      ...
>>
>> would be easier to read.
> 
> Yeah, that would work, and might be a good way to prepare for group 
> clamping if it ever becomes a thing.
> 

Sadly it's not as easy as I hoped. The problem is that we need to fetch the 
task's uclamp values here, so we need to get p anyway. Also, even if one day 
we implement group uclamp, we would need to fetch the cfs_rq this se is on 
instead of the whole rq, so the function signature would need to change 
anyway. Keeping it the current way might be the better thing to do here.
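
To make the comparison concrete, the se-based variant would end up looking 
roughly like this (untested sketch; the uclamp_eff_value() part is my guess 
at the unquoted top of the function):

	static void update_util_bias(struct sched_avg *avg, struct sched_entity *se)
	{
		unsigned long util, uclamp_min, uclamp_max;
		struct task_struct *p;
		int old, new;

		/* Bail out early for non-task entities, as suggested. */
		if (!entity_is_task(se))
			return;

		/* We still need p, because uclamp values live on the task. */
		p = task_of(se);
		util = READ_ONCE(se->avg.util_avg);
		uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
		uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);

		old = READ_ONCE(se->avg.util_avg_bias);
		new = (int)clamp(util, uclamp_min, uclamp_max) - (int)util;

		WRITE_ONCE(se->avg.util_avg_bias, new);
		if (!se->on_rq)
			return;
		WRITE_ONCE(avg->util_avg_bias, READ_ONCE(avg->util_avg_bias) + new - old);
	}

As you can see, it only moves the task_of() call inside the helper, and the 
signature would still have to change for group uclamp later.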

> [...]
