Message-ID: <tip-3e386d56bafbb6d2540b49367444997fc671ea69@git.kernel.org>
Date: Tue, 20 Oct 2015 02:31:22 -0700
From: tip-bot for Yuyang Du <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, torvalds@...ux-foundation.org,
mingo@...nel.org, linux-kernel@...r.kernel.org, efault@....de,
yuyang.du@...el.com, hpa@...or.com, tglx@...utronix.de,
dietmar.eggemann@....com
Subject: [tip:sched/core] sched/fair: Update task group's load_avg after task migration
Commit-ID: 3e386d56bafbb6d2540b49367444997fc671ea69
Gitweb: http://git.kernel.org/tip/3e386d56bafbb6d2540b49367444997fc671ea69
Author: Yuyang Du <yuyang.du@...el.com>
AuthorDate: Tue, 13 Oct 2015 09:18:23 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 20 Oct 2015 10:13:35 +0200
sched/fair: Update task group's load_avg after task migration
When a cfs_rq has cfs_rq->removed_load_avg set (i.e. a task has migrated away
from this cfs_rq), we need to update its contribution to the group's load_avg.
This should not increase the number of tg updates by much, because in most
cases the cfs_rq has already decayed its load_avg (see the call-site sketch
after the diff).
Tested-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Yuyang Du <yuyang.du@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1444699103-20272-2-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc62c50..9a5e60f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2664,13 +2664,14 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
 /* Group cfs_rq's load_avg is used for task_h_load and update_cfs_share */
 static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 {
-	int decayed;
 	struct sched_avg *sa = &cfs_rq->avg;
+	int decayed, removed = 0;
 
 	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
 		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
 		sa->load_avg = max_t(long, sa->load_avg - r, 0);
 		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
+		removed = 1;
 	}
 
 	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -2688,7 +2689,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	return decayed;
+	return decayed || removed;
 }
 
 /* Update task and its cfs_rq load average */
--
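For context, the return value of update_cfs_rq_load_avg() is what drives the
task group update: its caller only refreshes the tg contribution when the
function reports a change, which is why reporting the folded-in removed load
is enough to cover the migration case. That call site is not part of this
diff; below is a minimal sketch of the assumed update_load_avg() path in
fair.c around this kernel version (helper names and signatures are taken from
that era's code, not from this message, so treat it as an illustration rather
than the exact source):

/* Update task and its cfs_rq load average (sketch, not part of the patch) */
static inline void update_load_avg(struct sched_entity *se, int update_tg)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(se);
	u64 now = cfs_rq_clock_task(cfs_rq);

	/* Decay the entity's own load average up to 'now'. */
	__update_load_avg(now, cpu_of(rq_of(cfs_rq)), &se->avg,
			  se->on_rq * scale_load_down(se->load.weight),
			  cfs_rq->curr == se, NULL);

	/*
	 * With this patch, update_cfs_rq_load_avg() also returns non-zero
	 * when removed_load_avg was just folded in, so a task that migrated
	 * away now causes update_tg_load_avg() to run here as well.
	 */
	if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
		update_tg_load_avg(cfs_rq, 0);
}

Because this check already exists in the caller, the patch only needs to
widen the return value to "decayed || removed" rather than touch the tg
update path itself.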