Date:   Fri, 15 Apr 2022 13:42:06 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     Benjamin Segall <bsegall@...gle.com>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, mgorman@...e.de, bristot@...hat.com,
        linux-kernel@...r.kernel.org, duanxiongchun@...edance.com,
        songmuchun@...edance.com, zhengqi.arch@...edance.com
Subject: Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and
 se->load in throttle_cfs_rq()

On 2022/4/14 01:30, Benjamin Segall wrote:
> Chengming Zhou <zhouchengming@...edance.com> writes:
> 
>> We use update_load_avg(cfs_rq, se, 0) in throttle_cfs_rq(), so the
>> cfs_rq->tg_load_avg_contrib and task_group->load_avg won't be updated
>> even when the cfs_rq's load_avg has changed.
>>
>> And we also don't call update_cfs_group(se), so the se->load won't
>> be updated either.
>>
>> Change to use update_load_avg(cfs_rq, se, UPDATE_TG) and add
>> update_cfs_group(se) in throttle_cfs_rq(), like we do in
>> dequeue_task_fair().
> 
> Hmm, this does look more correct; Vincent, was having this not do
> UPDATE_TG deliberate, or an accident that we all missed when checking?
> 
> It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
> the two-loops pass, but not the throttle_cfs_rq side.

Yes, UPDATE_TG was added in unthrottle_cfs_rq() in commit 39f23ce07b93
("sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list").

> 
> Also unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se)

It looks like we should also add update_cfs_group(se) in unthrottle_cfs_rq().
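Untested sketch of what that would look like, mirroring the throttle side
(the exact context lines in unthrottle_cfs_rq() may differ on your tree):

		if (!se->on_rq)
			goto unthrottle_throttle;

		update_load_avg(qcfs_rq, se, UPDATE_TG);
		se_update_runnable(se);
+		update_cfs_group(se);

I can send it as a separate patch, or fold it into v2 of this one.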

Thanks.

> 
> 
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>> ---
>>  kernel/sched/fair.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index d4bd299d67ab..b37dc1db7be7 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4936,8 +4936,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
>>  		if (!se->on_rq)
>>  			goto done;
>>  
>> -		update_load_avg(qcfs_rq, se, 0);
>> +		update_load_avg(qcfs_rq, se, UPDATE_TG);
>>  		se_update_runnable(se);
>> +		update_cfs_group(se);
>>  
>>  		if (cfs_rq_is_idle(group_cfs_rq(se)))
>>  			idle_task_delta = cfs_rq->h_nr_running;
