Message-ID: <CAKfTPtBo9GgkT1VeVA+1CBhiVZ8Vf+kp0XV3PWcBLGuZaz_hKQ@mail.gmail.com>
Date: Wed, 20 Jul 2022 17:34:32 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org, bsegall@...gle.com,
vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v2 07/10] sched/fair: use update_load_avg()
to attach/detach entity load_avg
On Wed, 20 Jul 2022 at 15:41, Chengming Zhou
<zhouchengming@...edance.com> wrote:
>
> On 2022/7/19 18:29, Vincent Guittot wrote:
> > On Fri, 15 Jul 2022 at 18:21, Chengming Zhou
> > <zhouchengming@...edance.com> wrote:
> >>
...
> >>
> >>>
> >>> It looks to me like you want to replace this with your `freeze PELT
> >>> when outside fair` model.
> >>
> >> Yes, I want to freeze PELT for two !fair cases:
> >>
> >> 1. A !fair task that has never been fair before: it will still have
> >> its init load_avg when it switches to fair.
> >
> > But I'm not sure it makes sense to keep these init values. As an
> > example, the util_avg is set according to the cpu utilization at the
> > time of the task creation. I would tend to decay them as these init
> > values become less and less relevant.
> >
> > So we should return early in post_init_entity_util_avg() and not set
> > util_avg if the sched class is not cfs.
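(Just to illustrate the idea, a rough sketch, not a tested patch; the
exact placement of the check inside post_init_entity_util_avg() may
differ from what we would actually do:)

void post_init_entity_util_avg(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	/*
	 * A !fair task keeps its PELT signal frozen until it switches
	 * to fair, so don't seed util_avg from the current cpu
	 * utilization: that estimate will be stale by then anyway.
	 */
	if (p->sched_class != &fair_sched_class) {
		se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
		return;
	}

	/* ... existing util_avg/runnable_avg seeding and attach ... */
}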
>
> Yes, this is indeed a problem if we attach this init sched_avg of a
> !fair task. I'm also not sure whether it makes sense to keep them at
> 0. Will it cause an unfairness problem between cfs_rqs?
Why should it cause an unfairness problem? !fair tasks are not
accounted, and their PELT values will have decayed down to 0 after
320ms anyway (with the current implementation). So it's just as if you
had started directly after those 320ms.
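(Back-of-the-envelope check of that 320ms figure, purely illustrative,
not kernel code: PELT halves a remembered contribution every 32ms, so
after 320ms, i.e. 10 half-lives, only 1/1024 of the original value is
left, which is below the resolution of the 10-bit fixed-point
averages:)

#include <stdio.h>

int main(void)
{
	unsigned int util = 1024;	/* e.g. a fully-busy util_avg */
	int half_lives = 320 / 32;	/* 10 half-lives in 320ms */

	/* PELT halves the remembered contribution every 32ms. */
	for (int i = 0; i < half_lives; i++)
		util /= 2;

	/* prints 1, i.e. effectively 0 at PELT's 10-bit resolution */
	printf("remaining after 320ms: %u\n", util);
	return 0;
}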
>
> >
> >>
> >> 2. A !fair task that has been through switched_from_fair(): it will
> >> still keep its latest load_avg when it switches to fair.
> >>
> >>>
> >>>> This patch makes update_load_avg() the only location of attach/detach,
> >>>> and it can handle corner cases like changing the cgroup of NEW tasks
> >>>> by checking last_update_time before attach/detach.
> >>>
> >>> [...]
> >>>
> >>>> @@ -11527,9 +11522,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
> >>>> struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >>>>
> >>>> /* Catch up with the cfs_rq and remove our load when we leave */
> >>>> - update_load_avg(cfs_rq, se, 0);
> >>>> - detach_entity_load_avg(cfs_rq, se);
> >>>> - update_tg_load_avg(cfs_rq);
> >>>> + update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
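(For context, my understanding is that the flag handling inside
update_load_avg() then ends up roughly like the sketch below; this is
my reconstruction from the description above, not the actual hunk from
the patch:)

	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
		/* Never attached (or detached earlier): attach now. */
		attach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (flags & DO_DETACH) {
		/*
		 * The caller (e.g. detach_entity_cfs_rq()) wants the
		 * entity's load removed from the cfs_rq it is leaving.
		 */
		detach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (decayed) {
		cfs_rq_util_change(cfs_rq, 0);

		if (flags & UPDATE_TG)
			update_tg_load_avg(cfs_rq);
	}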
> >>>
> >>> IMHO, the DO_[DE|AT]TACH comments in update_load_avg() would have to be
> >>> updated in this case.
> >>
> >> Correct, will do.
> >>
> >> Thanks.
> >>
> >>>
> >>> [...]