Message-Id: <1466377363-20933-4-git-send-email-yuyang.du@intel.com>
Date: Mon, 20 Jun 2016 07:02:42 +0800
From: Yuyang Du <yuyang.du@...el.com>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
matt@...eblueprint.co.uk, Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH v7 3/4] sched/fair: Skip detach sched avgs for new task when changing task groups
A newly forked task has not been enqueued, so it should not be removed from
its cfs_rq in task_move_group_fair(). To that end, identify newly forked
tasks by checking whether their sum_exec_runtime is 0, the existing heuristic
already used by vruntime_normalized(). In addition, use the same test
uniformly in remove_entity_load_avg().
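As an aside (not part of the patch): a minimal stand-alone sketch of the
heuristic. The struct and helper names below are illustrative only and are
not the kernel's; the point is simply that a task whose sum_exec_runtime is
still 0 has never run, so its load average was never attached to any cfs_rq
and there is nothing to detach or remove.

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy model of the single field the patch consults. */
	struct toy_sched_entity {
		unsigned long long sum_exec_runtime;	/* total time this task has run */
	};

	/* Hypothetical helper mirroring the check added in detach_entity_load_avg(). */
	static bool never_ran(const struct toy_sched_entity *se)
	{
		return se->sum_exec_runtime == 0;
	}

	int main(void)
	{
		struct toy_sched_entity forked  = { .sum_exec_runtime = 0 };
		struct toy_sched_entity running = { .sum_exec_runtime = 12345 };

		/* A newly forked task has nothing attached, so detach is skipped. */
		printf("forked:  %s\n", never_ran(&forked)  ? "skip detach" : "detach");
		printf("running: %s\n", never_ran(&running) ? "skip detach" : "detach");
		return 0;
	}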
Signed-off-by: Yuyang Du <yuyang.du@...el.com>
---
kernel/sched/fair.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35d76cf..c1de063 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2943,6 +2943,10 @@ static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_en
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
+ /* Newly forked tasks are not attached yet. */
+ if (!se->sum_exec_runtime)
+ return;
+
__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
&se->avg, se->on_rq * scale_load_down(se->load.weight),
cfs_rq->curr == se, NULL);
@@ -3033,7 +3037,7 @@ void remove_entity_load_avg(struct sched_entity *se)
* Newly created task or never used group entity should not be removed
* from its (source) cfs_rq
*/
- if (se->avg.last_update_time == 0)
+ if (!se->sum_exec_runtime)
return;
last_update_time = cfs_rq_last_update_time(cfs_rq);
--
1.7.9.5