Message-ID: <tip-50a2a3b246149d041065a67ccb3e98145f780a2f@git.kernel.org>
Date: Sun, 13 Sep 2015 03:59:01 -0700
From: tip-bot for Byungchul Park <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
hpa@...or.com, byungchul.park@....com, efault@....de,
tglx@...utronix.de, mingo@...nel.org, peterz@...radead.org
Subject: [tip:sched/core] sched/fair: Have task_move_group_fair()
unconditionally add the entity load to the runqueue
Commit-ID: 50a2a3b246149d041065a67ccb3e98145f780a2f
Gitweb: http://git.kernel.org/tip/50a2a3b246149d041065a67ccb3e98145f780a2f
Author: Byungchul Park <byungchul.park@....com>
AuthorDate: Thu, 20 Aug 2015 20:21:57 +0900
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Sun, 13 Sep 2015 09:52:46 +0200
sched/fair: Have task_move_group_fair() unconditionally add the entity load to the runqueue
Currently we conditionally add the entity load to the rq when moving
the task between cgroups.
This doesn't make sense as we always 'migrate' the task between
cgroups, so we should always migrate the load too.
[ The history here is that we used to only migrate the blocked load,
which was only meaningful when !queued. ]
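[ For reference, a rough sketch of the tail of task_move_group_fair() with
this patch applied, reconstructed from the diff below; the declarations and
the earlier !queued vruntime normalization are elided: ]

static void task_move_group_fair(struct task_struct *p, int queued)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq;

	/* ... earlier !queued handling elided ... */

	set_task_rq(p, task_cpu(p));
	se->depth = se->parent ? se->parent->depth + 1 : 0;

	/* cfs_rq is now looked up unconditionally ... */
	cfs_rq = cfs_rq_of(se);
	if (!queued)
		se->vruntime += cfs_rq->min_vruntime;

	/* ... and the entity load is always attached to the new cfs_rq,
	   whether or not the task was queued. */
	attach_entity_load_avg(cfs_rq, se);
}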
Signed-off-by: Byungchul Park <byungchul.park@....com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: yuyang.du@...el.com
Link: http://lkml.kernel.org/r/1440069720-27038-3-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a72a71b..959b2ea 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8041,13 +8041,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 		se->vruntime -= cfs_rq_of(se)->min_vruntime;
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
-	if (!queued) {
-		cfs_rq = cfs_rq_of(se);
+	cfs_rq = cfs_rq_of(se);
+	if (!queued)
 		se->vruntime += cfs_rq->min_vruntime;
 
-		/* Virtually synchronize task with its new cfs_rq */
-		attach_entity_load_avg(cfs_rq, se);
-	}
+	/* Virtually synchronize task with its new cfs_rq */
+	attach_entity_load_avg(cfs_rq, se);
 }
 
 void free_fair_sched_group(struct task_group *tg)
--