Message-Id: <1476695653-12309-6-git-send-email-vincent.guittot@linaro.org>
Date: Mon, 17 Oct 2016 11:14:12 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org, dietmar.eggemann@....com
Cc: yuyang.du@...el.com, Morten.Rasmussen@....com,
linaro-kernel@...ts.linaro.org, pjt@...gle.com, bsegall@...gle.com,
kernellwp@...il.com, Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH 5/6 v5] sched: propagate asynchronous detach

A task can be asynchronously detached from a cfs_rq when it migrates
between CPUs. The load of the migrated task is then removed from the
source cfs_rq during that cfs_rq's next update. We use this event to
set the propagation flag.
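
For illustration only (not part of the patch): a minimal user-space
sketch of the removed-load pattern this builds on, with C11 atomics
standing in for the kernel's atomic_long_* helpers and a plain int
standing in for the propagation flag. All names below are made up for
the example.

#include <stdatomic.h>
#include <stdio.h>

/* Models the cfs_rq fields involved in an asynchronous detach. */
struct cfs_rq_model {
	long load_avg;			/* updated only by the owning CPU */
	atomic_long removed_load_avg;	/* added to by remote, migrating CPUs */
	int propagate;			/* stands in for the propagation flag */
};

/* Remote side: the migrating task's load is queued for removal. */
static void remove_entity_load(struct cfs_rq_model *cfs_rq, long task_load)
{
	atomic_fetch_add(&cfs_rq->removed_load_avg, task_load);
}

/* Local side: the next update consumes everything queued so far. */
static void update_cfs_rq_load(struct cfs_rq_model *cfs_rq)
{
	long r = atomic_exchange(&cfs_rq->removed_load_avg, 0);

	if (r) {
		/* clamp at zero, like the kernel's sub_positive() */
		cfs_rq->load_avg -= (r > cfs_rq->load_avg) ? cfs_rq->load_avg : r;
		cfs_rq->propagate = 1;	/* mirrors set_tg_cfs_propagate() */
	}
}

int main(void)
{
	struct cfs_rq_model rq = { .load_avg = 1024 };

	remove_entity_load(&rq, 512);	/* would run on another CPU */
	update_cfs_rq_load(&rq);	/* next update on the owning CPU */
	printf("load_avg=%ld propagate=%d\n", rq.load_avg, rq.propagate);
	return 0;
}

The exchange is what makes the detach safe without holding the remote
rq lock: however many migrations queued load in the meantime, the
owning CPU removes it all in one shot and flags the change for
propagation up the hierarchy.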
During load balance, we take advantage of the update of blocked load
to propagate any pending change. The propagation relies on the patch
"sched: fix hierarchical order in rq->leaf_cfs_rq_list", which orders
children before their parents, to ensure that the whole hierarchy is
updated in a single pass.
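
Again purely illustrative (hypothetical names, not kernel code): a
sketch of why the child-before-parent ordering makes a single pass
sufficient. Each node pushes its pending delta to its parent as the
walk reaches it, so a delta created at a leaf reaches the root within
the same traversal.

#include <stdio.h>

struct node {
	struct node *parent;
	long load;	/* this group's accumulated load */
	long pending;	/* change not yet reflected in the parent */
};

/* list mirrors rq->leaf_cfs_rq_list: every child before its parent */
static void propagate_one_pass(struct node **list, int n)
{
	for (int i = 0; i < n; i++) {
		struct node *cfs_rq = list[i];

		if (!cfs_rq->pending)
			continue;
		cfs_rq->load += cfs_rq->pending;
		if (cfs_rq->parent)	/* like update_load_avg() on tg->se[cpu] */
			cfs_rq->parent->pending += cfs_rq->pending;
		cfs_rq->pending = 0;
	}
}

int main(void)
{
	struct node root = { .load = 3072 };
	struct node mid  = { .parent = &root, .load = 2048 };
	struct node leaf = { .parent = &mid,  .load = 1024 };
	struct node *list[] = { &leaf, &mid, &root };

	leaf.pending = -512;	/* a task's load was detached at the leaf */
	propagate_one_pass(list, 3);
	printf("leaf=%ld mid=%ld root=%ld\n", leaf.load, mid.load, root.load);
	/* prints leaf=512 mid=1536 root=2560: one walk, fully propagated */
	return 0;
}

With parents ordered before their children, the same walk would stop
one level short per pass, which is exactly what the ordering patch
avoids.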
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
kernel/sched/fair.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 91fc949..4c0a404 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3231,6 +3231,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->load_avg, r);
 		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
 		removed_load = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -3238,6 +3239,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->util_avg, r);
 		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
 		removed_util = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -6803,6 +6805,10 @@ static void update_blocked_averages(int cpu)
 
 		if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
 			update_tg_load_avg(cfs_rq, 0);
+
+		/* Propagate pending load changes to the parent */
+		if (cfs_rq->tg->se[cpu])
+			update_load_avg(cfs_rq->tg->se[cpu], 0);
 	}
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
--
2.7.4