Message-ID: <3cc8def4-54ef-9ca5-7da9-eaa38ad9bd4c@bytedance.com>
Date: Thu, 21 Jul 2022 21:56:05 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...hat.com,
peterz@...radead.org, rostedt@...dmis.org, bsegall@...gle.com,
vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v2 07/10] sched/fair: use update_load_avg()
to attach/detach entity load_avg
On 2022/7/20 23:34, Vincent Guittot wrote:
> On Wed, 20 Jul 2022 at 15:41, Chengming Zhou
> <zhouchengming@...edance.com> wrote:
>>
>> On 2022/7/19 18:29, Vincent Guittot wrote:
>>> On Fri, 15 Jul 2022 at 18:21, Chengming Zhou
>>> <zhouchengming@...edance.com> wrote:
>>>>
>
> ...
>
>>>>
>>>>>
>>>>> Looks to me like you want to replace this with your `freeze PELT when
>>>>> outside fair` model.
>>>>
>>>> Yes, I want to freeze PELT for two !fair cases:
>>>>
>>>> 1. A !fair task that has never been fair: it will still have its init
>>>> load_avg when it switches to fair.
>>>
>>> But I'm not sure it makes sense to keep these init values. As an
>>> example, the util_avg is set according to the cpu utilization at the
>>> time of the task creation. I would tend to decay them as these init
>>> values become less and less relevant.
>>>
>>> So we should return early in post_init_entity_util_avg() and not set
>>> util_avg if the sched class is not cfs.
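For concreteness, a minimal sketch of that early return (my reading of
Vincent's suggestion, based on the current shape of
post_init_entity_util_avg(); not an actual patch):

    static void post_init_entity_util_avg(struct task_struct *p)
    {
            struct sched_entity *se = &p->se;
            struct cfs_rq *cfs_rq = cfs_rq_of(se);

            if (p->sched_class != &fair_sched_class) {
                    /* Leave util_avg at 0 and only sync the clock, so
                     * the frozen PELT values attach cleanly on a later
                     * switched_to_fair(). */
                    se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
                    return;
            }

            /* ... existing util_avg seeding for fair tasks ... */
    }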
>>
>> Yes, this is indeed a problem if we attach the init sched_avg of a !fair
>> task. I'm also not sure whether it makes sense to keep them at 0? Will it
>> cause an unfairness problem between cfs_rqs?
>
> Why should it cause an unfairness problem? !fair tasks are not
> accounted, and their PELT values will be decayed down to 0 after 320ms
> anyway (with the current implementation). So it's just as if you
> started directly after those 320ms.
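Side note on the 320ms figure: PELT's half-life is 32ms, so 320ms is ten
half-lives, and a signal shrinks to 2^-10 = 1/1024 of its value. A
back-of-envelope check (a sketch, not kernel code):

    unsigned long v = 1024;                 /* SCHED_CAPACITY_SCALE    */
    for (int i = 0; i < 10; i++)
            v >>= 1;                        /* one 32ms PELT half-life */
    /* v == 1 now, i.e. effectively fully decayed on this scale. */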
Thanks for your patient explanation. IMHO, if we have an init sched_avg
for a new fair task (A) but 0 for a new task switched from !fair (B), then
what's the point of the init sched_avg for the fair task?

Task B will need some time to reach its stable load value, so during that
ramp-up its cfs_rq may not get enough shares? Imagine the scenario below:
if fair task A and switched-from-!fair task B start at the same time,
could that cause unfairness between cfs0 and cfs1? (A rough sketch of the
shares math follows the diagram.)
CPU0         tg         CPU1
  |         /  \          |
  |        /    \         |
        cfs0    cfs1
         (A)     (B)
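Roughly speaking, a tg's shares are redistributed in proportion to each
child cfs_rq's load. The helper below is a hypothetical simplification of
calc_group_shares(), not a kernel function, just to illustrate the concern:

    /* Proportional share of tg_shares for one cfs_rq. */
    static long approx_shares(long tg_shares, long my_load, long total_load)
    {
            /* Guard against an empty tg; real code is more careful. */
            return total_load ? tg_shares * my_load / total_load : tg_shares;
    }

    /* approx_shares(1024, 512, 512 + 0) == 1024  -> cfs0 holding (A) */
    /* approx_shares(1024,   0, 512 + 0) ==    0  -> cfs1 holding (B) */

So until B's PELT signal ramps up, cfs1 would get almost no shares.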
If runnable_avg and util_avg are 0 when a task switches from !fair, will
we need more time for load balancing or CPU frequency adjustment? I think
that's the reason we have an init sched_avg for new fair tasks (a rough
paraphrase of the seeding is below). Should we care about these cases, or
will it be no problem?
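For context, a rough paraphrase of how post_init_entity_util_avg() seeds a
new fair task's util_avg from the current rq state (details may differ
between trees):

    long cpu_scale = arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq)));
    long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;

    if (cap > 0) {
            if (cfs_rq->avg.util_avg != 0) {
                    /* Seed in proportion to the task's weight ... */
                    sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
                    sa->util_avg /= (cfs_rq->avg.load_avg + 1);
                    /* ... capped at half the spare capacity. */
                    if (sa->util_avg > cap)
                            sa->util_avg = cap;
            } else {
                    sa->util_avg = cap;
            }
    }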
I'm not sure, I must have missed something :-)
Thanks!
>
>>
>>>
>>>>
>>>> 2. A !fair task that has been through switched_from_fair(): it will
>>>> still keep its latest load_avg when it switches back to fair.
>>>>
>>>>>
>>>>>> This patch makes update_load_avg() the only location of attach/detach,
>>>>>> and can handle corner cases like changing the cgroup of NEW tasks by
>>>>>> checking last_update_time before attach/detach.
>>>>>
>>>>> [...]
>>>>>
>>>>>> @@ -11527,9 +11522,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
>>>>>>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>>>>>>
>>>>>>  	/* Catch up with the cfs_rq and remove our load when we leave */
>>>>>> -	update_load_avg(cfs_rq, se, 0);
>>>>>> -	detach_entity_load_avg(cfs_rq, se);
>>>>>> -	update_tg_load_avg(cfs_rq);
>>>>>> +	update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
>>>>>
>>>>> IMHO, the DO_[DE|AT]TACH comments in update_load_avg() would have to be
>>>>> updated in this case.
>>>>
>>>> Correct, will do.
>>>>
>>>> Thanks.
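For the record, I would expect the updated flow in update_load_avg() to
look roughly like this (a sketch of the intent, not the final patch):

    decayed  = update_cfs_rq_load_avg(now, cfs_rq);
    decayed |= propagate_entity_load_avg(se);

    if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
            /* Entity was never attached: attach it and its load now. */
            attach_entity_load_avg(cfs_rq, se);
            update_tg_load_avg(cfs_rq);
    } else if (flags & DO_DETACH) {
            /* Remove this entity's contribution before it leaves. */
            detach_entity_load_avg(cfs_rq, se);
            update_tg_load_avg(cfs_rq);
    } else if (decayed) {
            cfs_rq_util_change(cfs_rq, 0);
            if (flags & UPDATE_TG)
                    update_tg_load_avg(cfs_rq);
    }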
>>>>
>>>>>
>>>>> [...]