Message-ID: <d020fb8d-ed73-56c7-bf10-d13b3617bfd0@bytedance.com>
Date: Thu, 14 Jul 2022 21:03:20 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>, mingo@...hat.com,
peterz@...radead.org, vincent.guittot@...aro.org,
rostedt@...dmis.org, bsegall@...gle.com, vschneid@...hat.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v2 03/10] sched/fair: maintain task se
depth in set_task_rq()
On 2022/7/14 20:30, Dietmar Eggemann wrote:
> On 13/07/2022 06:04, Chengming Zhou wrote:
>> Previously we only maintained task se depth in task_move_group_fair();
>> if a !fair task changed task groups, its se depth would not be updated,
>> so commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR")
>> fixed the problem by updating se depth in switched_to_fair() too.
>
> Maybe it's worth mentioning how the se.depth setting from
> task_move_group_fair() and switched_to_fair() went into
> attach_task_cfs_rq() with commit daa59407b558 ("sched/fair: Unify
> switched_{from,to}_fair() and task_move_group_fair()") and further into
> attach_entity_cfs_rq() with commit df217913e72e ("sched/fair: Factorize
> attach/detach entity").
>
Good point, I will add this part in the next version.
Thanks for your review!
>> This patch moves task se depth maintenance to set_task_rq(), which is
>> called whenever the task's CPU or cgroup changes, so its depth will always be correct.
>>
>> This patch is preparation for the next patch.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
>
>> ---
>> kernel/sched/fair.c | 8 --------
>> kernel/sched/sched.h | 1 +
>> 2 files changed, 1 insertion(+), 8 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 2a3e12ead144..bf595b622656 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -11539,14 +11539,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
>> {
>> struct cfs_rq *cfs_rq = cfs_rq_of(se);
>>
>> -#ifdef CONFIG_FAIR_GROUP_SCHED
>> - /*
>> - * Since the real-depth could have been changed (only FAIR
>> - * class maintain depth value), reset depth properly.
>> - */
>> - se->depth = se->parent ? se->parent->depth + 1 : 0;
>> -#endif
>> -
>> /* Synchronize entity with its cfs_rq */
>> update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
>> attach_entity_load_avg(cfs_rq, se);
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index aad7f5ee9666..8cc3eb7b86cd 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -1940,6 +1940,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
>> set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
>> p->se.cfs_rq = tg->cfs_rq[cpu];
>> p->se.parent = tg->se[cpu];
>> + p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
>> #endif
>>
>> #ifdef CONFIG_RT_GROUP_SCHED
>
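For anyone following along, here is a minimal user-space sketch of the
invariant that the new p->se.depth assignment in set_task_rq() maintains.
This is an illustration only, not kernel code: "struct entity" and
set_entity_depth() are simplified stand-ins for struct sched_entity and
the depth update above, and the root task group is modelled by a NULL
parent just as tg->se[cpu] is NULL for the root group.

#include <assert.h>
#include <stddef.h>

struct entity {
	struct entity *parent;
	int depth;
};

/* Mirrors the new line in set_task_rq(): depth = parent depth + 1, or 0 when there is no parent se. */
static void set_entity_depth(struct entity *se, struct entity *parent)
{
	se->parent = parent;
	se->depth = parent ? parent->depth + 1 : 0;
}

int main(void)
{
	struct entity *root_se = NULL;	/* root task group has no se */
	struct entity group_se, task_se;

	/* a first-level group entity hangs off the root: depth 0 */
	set_entity_depth(&group_se, root_se);
	assert(group_se.depth == 0);

	/* a task in that group sits one level deeper */
	set_entity_depth(&task_se, &group_se);
	assert(task_se.depth == 1);

	/* moving the task to the root group resets its depth */
	set_entity_depth(&task_se, root_se);
	assert(task_se.depth == 0);

	return 0;
}

With the assignment done in set_task_rq(), the depth is refreshed on every
CPU or cgroup change regardless of the task's scheduling class, which is
why the CONFIG_FAIR_GROUP_SCHED fixup in attach_entity_cfs_rq() can go away.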