Date:   Mon, 18 Apr 2022 21:20:25 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Benjamin Segall <bsegall@...gle.com>, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
        bristot@...hat.com, linux-kernel@...r.kernel.org,
        duanxiongchun@...edance.com, songmuchun@...edance.com,
        zhengqi.arch@...edance.com
Subject: Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and
 se->load in throttle_cfs_rq()

On 2022/4/15 15:51, Vincent Guittot wrote:
> On Fri, 15 Apr 2022 at 07:42, Chengming Zhou
> <zhouchengming@...edance.com> wrote:
>>
>> On 2022/4/14 01:30, Benjamin Segall wrote:
>>> Chengming Zhou <zhouchengming@...edance.com> writes:
>>>
>>>> We use update_load_avg(cfs_rq, se, 0) in throttle_cfs_rq(), so the
>>>> cfs_rq->tg_load_avg_contrib and task_group->load_avg won't be updated
>>>> even when the cfs_rq's load_avg has changed.
>>>>
>>>> And we also don't call update_cfs_group(se), so the se->load won't
>>>> be updated either.
>>>>
>>>> Change to use update_load_avg(cfs_rq, se, UPDATE_TG) and add
>>>> update_cfs_group(se) in throttle_cfs_rq(), like we do in
>>>> dequeue_task_fair().
>>>
>>> Hmm, this does look more correct; Vincent, was having this not do
>>> UPDATE_TG deliberate, or an accident that we all missed when checking?
> 
> The cost of UPDATE_TG/update_tg_load_avg() is not negligible, and the parent
> cfs->load_avg should not change because of the throttling, only the
> cfs->weight, so I don't see a real benefit of UPDATE_TG.

Hi Vincent,

If the current task has already been dequeued (in pick_next_task_fair()) before
throttle_cfs_rq() runs, the parent cfs_rq has to wait for the throttled cfs_rq
to be enqueued again, in enqueue_entity(), before update_tg_load_avg() is called.
That delays updating the parent cfs_rq->load_avg and the load.weight of that
group se, so fairness between task_groups may be delayed.
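
For comparison, a rough sketch of the second per-entity loop in
dequeue_task_fair() (trimmed from kernel/sched/fair.c of this era; the
surrounding h_nr_running accounting is omitted):

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);

		/* UPDATE_TG may also refresh cfs_rq->tg_load_avg_contrib
		 * and tg->load_avg when the contribution has drifted. */
		update_load_avg(cfs_rq, se, UPDATE_TG);
		se_update_runnable(se);
		/* recompute the group se's load.weight (se->load) */
		update_cfs_group(se);
	}

The patch just makes the throttle path do the same per-entity update.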

update_tg_load_avg() won't touch tg->load_avg if (delta <= cfs_rq->tg_load_avg_contrib / 64),
so the cost should already be avoided when the load_avg really is unchanged?
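
For reference, the check looks roughly like this (a sketch of
update_tg_load_avg() as in kernel/sched/fair.c of this era, written from
memory, so details may differ slightly):

	static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
	{
		long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

		/* The root task group's load_avg is never read, so skip it. */
		if (cfs_rq->tg == &root_task_group)
			return;

		/* Only propagate once the contribution has drifted by more than ~1/64. */
		if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
			atomic_long_add(delta, &cfs_rq->tg->load_avg);
			cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
		}
	}

So an extra UPDATE_TG on an unchanged load_avg should mostly be a compare and
an early return.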

> 
> Chengming,
> have you faced an issue or this change is based on code review ?

Yes, this change is based on code review and git log history.

Thanks.

> 
>>>
>>> It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
>>> the two-loops pass, but not the throttle_cfs_rq side.
>>
>> Yes, UPDATE_TG was added in unthrottle_cfs_rq() in commit 39f23ce07b93
>> ("sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list").
>>
>>>
>>> Also unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se)
>>
>> It looks like we should also add update_cfs_group(se) in unthrottle_cfs_rq().
>>
>> Thanks.
>>
>>>
>>>
>>>>
>>>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>>>> ---
>>>>  kernel/sched/fair.c | 3 ++-
>>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index d4bd299d67ab..b37dc1db7be7 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -4936,8 +4936,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
>>>>              if (!se->on_rq)
>>>>                      goto done;
>>>>
>>>> -            update_load_avg(qcfs_rq, se, 0);
>>>> +            update_load_avg(qcfs_rq, se, UPDATE_TG);
>>>>              se_update_runnable(se);
>>>> +            update_cfs_group(se);
>>>>
>>>>              if (cfs_rq_is_idle(group_cfs_rq(se)))
>>>>                      idle_task_delta = cfs_rq->h_nr_running;
