Message-Id: <20220818034343.87625-2-zhouchengming@bytedance.com>
Date: Thu, 18 Aug 2022 11:43:35 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: vincent.guittot@...aro.org, dietmar.eggemann@....com,
mingo@...hat.com, peterz@...radead.org, rostedt@...dmis.org,
bsegall@...gle.com, vschneid@...hat.com
Cc: linux-kernel@...r.kernel.org, tj@...nel.org,
Chengming Zhou <zhouchengming@...edance.com>
Subject: [PATCH v5 1/9] sched/fair: maintain task se depth in set_task_rq()

Previously we only maintained task se depth in task_move_group_fair(): if
a !fair task changed its task group, its se depth would not be updated.
Commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to
FAIR") fixed the problem by updating se depth in switched_to_fair() too.

Then commit daa59407b558 ("sched/fair: Unify switched_{from,to}_fair()
and task_move_group_fair()") unified these two functions and moved the
se.depth setting into attach_task_cfs_rq(), from where it moved further
into attach_entity_cfs_rq() with commit df217913e72e ("sched/fair:
Factorize attach/detach entity").

This patch moves task se depth maintenance from attach_entity_cfs_rq()
to set_task_rq(), which is called whenever the task's CPU or cgroup
changes, so its depth will always be correct.

This patch is preparation for the next patch.
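
For illustration only (not part of the patch; the struct is reduced to
the relevant fields and the helper name is made up), a minimal sketch of
the invariant that set_task_rq() now maintains: an entity's depth is one
more than its parent's, with root entities at depth 0.

/* Illustrative sketch only: sched_entity reduced to its depth linkage. */
struct sched_entity {
	struct sched_entity	*parent;
	int			depth;
};

/* depth == number of group-se ancestors above this entity */
static void update_depth(struct sched_entity *se)
{
	se->depth = se->parent ? se->parent->depth + 1 : 0;
}

Since set_task_rq() sits on both the CPU-migration and cgroup-move
paths, it is a natural single place to keep this invariant.
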
Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
---
 kernel/sched/fair.c  | 8 --------
 kernel/sched/sched.h | 1 +
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a71d6686149b..c5ee08b187ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11726,14 +11726,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	/*
-	 * Since the real-depth could have been changed (only FAIR
-	 * class maintain depth value), reset depth properly.
-	 */
-	se->depth = se->parent ? se->parent->depth + 1 : 0;
-#endif
-
 	/* Synchronize entity with its cfs_rq */
 	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ddcfc7837595..628ffa974123 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1932,6 +1932,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
+	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
 #endif
 
 #ifdef CONFIG_RT_GROUP_SCHED
--
2.37.2