Message-Id: <20220526071354.6426-2-zhouchengming@bytedance.com>
Date: Thu, 26 May 2022 15:13:53 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, odin@...d.al
Cc: linux-kernel@...r.kernel.org, duanxiongchun@...edance.com,
songmuchun@...edance.com,
Chengming Zhou <zhouchengming@...edance.com>
Subject: [PATCH v2 1/2] sched/fair: fix propagate during synchronous attach/detach
When a task moves from/to a cfs_rq, we first detach/attach the load_avg
of its se from/to that cfs_rq, then propagate the changes across the tg
tree to make them visible to the root; this propagation is done in
update_load_avg().

But the current code breaks out of the loop when it encounters an
on_list cfs_rq (see the snippet below), so the changes can't be
propagated up to the root cfs_rq. This also mismatches the comment of
propagate_entity_cfs_rq(), which says "Propagate the changes of the
sched_entity across the tg tree to make it visible to the root".

The second problem is that update_load_avg() is not called for a
throttled cfs_rq, so the load changes can't be propagated upwards:

        A
        |
        B  --> throttled cfs_rq
       /
      C

The prop_runnable_sum of C won't be propagated to B, and thus won't
be propagated to A.
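
Spelling out this example against the snippet above (a hypothetical
walk, assuming B is already on the leaf list):

        /*
         * se = C's group entity, cfs_rq = B (throttled):
         *   - update_load_avg() is skipped, so C's prop_runnable_sum
         *     is never folded into B;
         *   - list_add_leaf_cfs_rq(B) returns true because B is
         *     already on the leaf list, so the loop breaks before
         *     reaching A.
         *
         * With this patch, update_load_avg() runs at every level, so
         * the change is propagated through B up to A.
         */
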
Fixes: 0258bdfaff5b ("sched/fair: Fix unfairness caused by missing load decay")
Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
---
kernel/sched/fair.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 523e548c8fdd..5276d05692e0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11303,14 +11303,8 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		if (!cfs_rq_throttled(cfs_rq)){
-			update_load_avg(cfs_rq, se, UPDATE_TG);
-			list_add_leaf_cfs_rq(cfs_rq);
-			continue;
-		}
-
-		if (list_add_leaf_cfs_rq(cfs_rq))
-			break;
+		update_load_avg(cfs_rq, se, UPDATE_TG);
+		list_add_leaf_cfs_rq(cfs_rq);
 	}
 }
 #else
--
2.36.1