Message-Id: <1439425573-320-1-git-send-email-byungchul.park@lge.com>
Date: Thu, 13 Aug 2015 09:26:13 +0900
From: byungchul.park@....com
To: mingo@...nel.org, peterz@...radead.org
Cc: linux-kernel@...r.kernel.org, yuyang.du@...el.com,
Byungchul Park <byungchul.park@....com>
Subject: [PATCH v4] sched: sync with the prev cfs when changing cgroup within a cpu
From: Byungchul Park <byungchul.park@....com>
Changes from v3 to v4:
* adjust the cfs_rq load in the "queued" case, too

Changes from v2 to v3:
* rebase onto the tip tree

Changes from v1 to v2:
* wrap the load tracking code in #ifdef CONFIG_SMP
* make the commit message more compact; the previous one was confusing
----->8-----
From 1d5bcc21cece51eca250986846ed9b01a174bd54 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Thu, 13 Aug 2015 09:18:07 +0900
Subject: [PATCH v4] sched: sync with the prev cfs when changing cgroup within
a cpu
The current code handles cfs_rq's average loads incorrectly when moving
a task from one cgroup (= cfs_rq) to another. I tested this with
"echo pid > cgroup" and found that e.g. cfs_rq->avg.load_avg grew larger
and larger every time I moved the task from one cgroup to another.
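
For reference, a minimal reproduction sketch (it assumes the cgroup v1
cpu controller is mounted at /sys/fs/cgroup/cpu; the group names "A"
and "B" are made up, and the exact /proc/sched_debug field names vary
by kernel version):

	# start a cpu-bound task and pin it to cpu 0
	yes > /dev/null & pid=$!
	taskset -pc 0 $pid

	# bounce it between two groups
	mkdir /sys/fs/cgroup/cpu/A /sys/fs/cgroup/cpu/B
	for i in $(seq 20); do
		echo $pid > /sys/fs/cgroup/cpu/A/tasks
		echo $pid > /sys/fs/cgroup/cpu/B/tasks
	done

	# without this patch, the groups' load_avg keeps growing
	grep load_avg /proc/sched_debug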
We have to sync the se's average load with both the *prev* cfs_rq and
the next cfs_rq when changing its group. The old code only synced with
the next cfs_rq, so the task's stale contribution was left behind in the
prev cfs_rq and inflated its sums on every move.
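
To make the arithmetic of the leak concrete, here is a toy user-space
model (not kernel code; the task load of 100 and the two groups are
made-up values) of what attach-without-detach does to per-group sums:

	/* toy model: moving a task's load into the next group without
	 * removing it from the prev group leaks load on every move */
	#include <stdio.h>

	struct grp { long load_avg; };

	int main(void)
	{
		struct grp a = { 0 }, b = { 0 };
		struct grp *cur = &a, *next = &b, *tmp;
		long task_load = 100;
		int i;

		cur->load_avg += task_load;		/* initial attach */
		for (i = 0; i < 5; i++) {
			next->load_avg += task_load;	/* attach to next... */
							/* ...but never detach from cur */
			tmp = cur; cur = next; next = tmp;
		}
		/* prints A=300 B=300: 600 total, for one task of load 100 */
		printf("A=%ld B=%ld\n", a.load_avg, b.load_avg);
		return 0;
	}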
Signed-off-by: Byungchul Park <byungchul.park@....com>
---
kernel/sched/fair.c | 34 ++++++++++++++++++++++++----------
1 file changed, 24 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2a33d7b..979ca2c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8017,23 +8017,37 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 	if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
 		queued = 1;
 
+	cfs_rq = cfs_rq_of(se);
 	if (!queued)
-		se->vruntime -= cfs_rq_of(se)->min_vruntime;
+		se->vruntime -= cfs_rq->min_vruntime;
+
+#ifdef CONFIG_SMP
+	/* synchronize task with its prev cfs_rq */
+	if (!queued)
+		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
+				&se->avg, se->on_rq * scale_load_down(se->load.weight),
+				cfs_rq->curr == se, NULL);
+
+	/* remove our load when we leave */
+	cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
+	cfs_rq->avg.load_sum = max_t(s64, cfs_rq->avg.load_sum - se->avg.load_sum, 0);
+	cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
+	cfs_rq->avg.util_sum = max_t(s32, cfs_rq->avg.util_sum - se->avg.util_sum, 0);
+#endif
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
-	if (!queued) {
-		cfs_rq = cfs_rq_of(se);
+	cfs_rq = cfs_rq_of(se);
+	if (!queued)
 		se->vruntime += cfs_rq->min_vruntime;
 
 #ifdef CONFIG_SMP
-		/* Virtually synchronize task with its new cfs_rq */
-		p->se.avg.last_update_time = cfs_rq->avg.last_update_time;
-		cfs_rq->avg.load_avg += p->se.avg.load_avg;
-		cfs_rq->avg.load_sum += p->se.avg.load_sum;
-		cfs_rq->avg.util_avg += p->se.avg.util_avg;
-		cfs_rq->avg.util_sum += p->se.avg.util_sum;
+	/* Virtually synchronize task with its new cfs_rq */
+	p->se.avg.last_update_time = cfs_rq->avg.last_update_time;
+	cfs_rq->avg.load_avg += p->se.avg.load_avg;
+	cfs_rq->avg.load_sum += p->se.avg.load_sum;
+	cfs_rq->avg.util_avg += p->se.avg.util_avg;
+	cfs_rq->avg.util_sum += p->se.avg.util_sum;
 #endif
-	}
 }
 
 void free_fair_sched_group(struct task_group *tg)
--
1.7.9.5