Date:   Wed, 20 Jul 2022 21:48:20 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     mingo@...hat.com, peterz@...radead.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v2 08/10] sched/fair: fix load tracking for
 new forked !fair task

On 2022/7/19 20:35, Vincent Guittot wrote:
> On Wed, 13 Jul 2022 at 06:05, Chengming Zhou
> <zhouchengming@...edance.com> wrote:
>>
>> New forked !fair task will set its sched_avg last_update_time to
>> the pelt_clock of cfs_rq, after a while in switched_to_fair():
>>
>> switched_to_fair
>>   attach_task_cfs_rq
>>     attach_entity_cfs_rq
>>       update_load_avg
>>         __update_load_avg_se(now, cfs_rq, se)
>>
>> the delta (now - sa->last_update_time) will contribute to or decay the
>> sched_avg, depending on the task's running/runnable status at that time.
>>
>> This patch doesn't set the sched_avg last_update_time of a newly forked
>> !fair task, leaving it at 0. So later in update_load_avg(), we don't
>> need to contribute/decay the wrong delta (now - sa->last_update_time).
> 
> As mentioned in patch 7, I think it's wrong to not decay the init
> value of a !fair task, because it becomes obsolete if not used quickly
> (which is why we decay it)
> 
> It would be better to not set them at all in the case of !fair task

So just leave the !fair task's sched_avg at 0 and last_update_time == 0, right?


Thanks.

> 
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>> ---
>>  kernel/sched/fair.c | 18 ++----------------
>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 51fc20c161a3..50f65a2ede32 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -849,22 +849,8 @@ void post_init_entity_util_avg(struct task_struct *p)
>>
>>         sa->runnable_avg = sa->util_avg;
>>
>> -       if (p->sched_class != &fair_sched_class) {
>> -               /*
>> -                * For !fair tasks do:
>> -                *
>> -               update_cfs_rq_load_avg(now, cfs_rq);
>> -               attach_entity_load_avg(cfs_rq, se);
>> -               switched_from_fair(rq, p);
>> -                *
>> -                * such that the next switched_to_fair() has the
>> -                * expected state.
>> -                */
>> -               se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
>> -               return;
>> -       }
>> -
>> -       attach_entity_cfs_rq(se);
>> +       if (p->sched_class == &fair_sched_class)
>> +               attach_entity_cfs_rq(se);
>>  }
>>
>>  #else /* !CONFIG_SMP */
>> --
>> 2.36.1
>>
