Message-ID: <CAKfTPtBWXyamX0jFSvgP3VnZacd5SNb_Yg9jAq1y0koHwr7DxQ@mail.gmail.com>
Date: Fri, 15 Apr 2022 09:51:44 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Benjamin Segall <bsegall@...gle.com>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
bristot@...hat.com, linux-kernel@...r.kernel.org,
duanxiongchun@...edance.com, songmuchun@...edance.com,
zhengqi.arch@...edance.com
Subject: Re: [External] Re: [PATCH] sched/fair: update tg->load_avg and
se->load in throttle_cfs_rq()
On Fri, 15 Apr 2022 at 07:42, Chengming Zhou
<zhouchengming@...edance.com> wrote:
>
> On 2022/4/14 01:30, Benjamin Segall wrote:
> > Chengming Zhou <zhouchengming@...edance.com> writes:
> >
> >> We use update_load_avg(cfs_rq, se, 0) in throttle_cfs_rq(), so the
> >> cfs_rq->tg_load_avg_contrib and task_group->load_avg won't be updated
> >> even when the cfs_rq's load_avg has changed.
> >>
> >> And we also don't call update_cfs_group(se), so the se->load won't
> >> be updated either.
> >>
> >> Change to use update_load_avg(cfs_rq, se, UPDATE_TG) and add
> >> update_cfs_group(se) in throttle_cfs_rq(), like we do in
> >> dequeue_task_fair().
> >
> > Hmm, this does look more correct; Vincent, was having this not do
> > UPDATE_TG deliberate, or an accident that we all missed when checking?
UPDATE_TG/update_tg_load_avg() is not free, and the parent cfs->load_avg
should not change because of the throttling, only the cfs->weight, so I
don't see a real benefit of UPDATE_TG.
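(For context, a rough paraphrase of what UPDATE_TG ends up doing, not
copied verbatim from kernel/sched/fair.c: when the cfs_rq's contribution
has drifted far enough from tg_load_avg_contrib, the delta is propagated
to the shared tg->load_avg with an atomic add, which is the cost in
question:

	/* paraphrased sketch of update_tg_load_avg(), names as in mainline */
	static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
	{
		long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

		if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
			/* atomic RMW on a counter shared by all CPUs in the tg */
			atomic_long_add(delta, &cfs_rq->tg->load_avg);
			cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
		}
	}
)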
Chengming,
have you faced an issue, or is this change based on code review?
> >
> > It looks like the unthrottle_cfs_rq side got UPDATE_TG added later in
> > the two-loops pass, but not the throttle_cfs_rq side.
>
> Yes, UPDATE_TG was added in unthrottle_cfs_rq() in commit 39f23ce07b93
> ("sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list").
>
> >
> > Also unthrottle_cfs_rq I'm guessing could still use update_cfs_group(se)
>
> It looks like we should also add update_cfs_group(se) in unthrottle_cfs_rq().
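(For illustration only, such a change would presumably sit next to the
existing update_load_avg()/se_update_runnable() calls in
unthrottle_cfs_rq()'s second for_each_sched_entity() loop, mirroring the
throttle_cfs_rq() hunk below; the surrounding lines are paraphrased, not
copied from the tree:

	for_each_sched_entity(se) {
		struct cfs_rq *qcfs_rq = cfs_rq_of(se);

		update_load_avg(qcfs_rq, se, UPDATE_TG);
		se_update_runnable(se);
		update_cfs_group(se);	/* proposed addition */
		...
	}
)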
>
> Thanks.
>
> >
> >
> >>
> >> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
> >> ---
> >> kernel/sched/fair.c | 3 ++-
> >> 1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index d4bd299d67ab..b37dc1db7be7 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -4936,8 +4936,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
> >>  		if (!se->on_rq)
> >>  			goto done;
> >>
> >> -		update_load_avg(qcfs_rq, se, 0);
> >> +		update_load_avg(qcfs_rq, se, UPDATE_TG);
> >>  		se_update_runnable(se);
> >> +		update_cfs_group(se);
> >>
> >>  		if (cfs_rq_is_idle(group_cfs_rq(se)))
> >>  			idle_task_delta = cfs_rq->h_nr_running;