Date:	Tue, 11 Aug 2015 09:42:43 +0900
From:	byungchul.park@....com
To:	mingo@...nel.org, peterz@...radead.org
Cc:	linux-kernel@...r.kernel.org,
	Byungchul Park <byungchul.park@....com>
Subject: [PATCH v2] sched: sync with the prev cfs when changing cgroup within a cpu

From: Byungchul Park <byungchul.park@....com>

Changes from v1 to v2:
* wrap the load tracking code in #ifdef CONFIG_SMP
* make the commit message more compact; the previous one was confusing

----->8-----
From 02edcf69369bed72916304b449b82a74029ea908 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Tue, 11 Aug 2015 09:30:17 +0900
Subject: [PATCH v2] sched: sync with the prev cfs when changing cgroup within
 a cpu

The current code mishandles cfs_rq->blocked_load_avg when moving a task
from one cgroup (that is, one cfs_rq) to another. I tested this with
"echo pid > cgroup" and found that cfs_rq->blocked_load_avg grew larger
and larger every time I moved a task from one cgroup to another.

A task can move between groups within *a* CPU, and each cfs_rq tracks
its own blocked load, so we have to sync the se's average load with
both the *prev* cfs_rq and the next cfs_rq when changing its group. I
also removed the comments mentioning migrate_task_rq_fair(), since they
no longer apply here.
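
To make the leak concrete, here is a minimal userspace sketch; the
structs and helpers below are simplified stand-ins for the kernel's
cfs_rq bookkeeping, not actual kernel code:

#include <stdio.h>

/* Toy stand-in for the per-group blocked-load accounting. */
struct toy_cfs_rq {
	unsigned long blocked_load_avg;
};

struct toy_se {
	unsigned long load_avg_contrib;
};

/* Buggy move: only the next cfs_rq gains the contribution; the prev
 * cfs_rq keeps it forever. */
void move_buggy(struct toy_se *se, struct toy_cfs_rq *next)
{
	next->blocked_load_avg += se->load_avg_contrib;
}

/* Fixed move: subtract from the prev cfs_rq before adding to the
 * next one, as the patch does via subtract_blocked_load_contrib(). */
void move_fixed(struct toy_se *se, struct toy_cfs_rq *prev,
		struct toy_cfs_rq *next)
{
	prev->blocked_load_avg -= se->load_avg_contrib;
	next->blocked_load_avg += se->load_avg_contrib;
}

int main(void)
{
	struct toy_cfs_rq a = { 100 }, b = { 0 };
	struct toy_se se = { 100 };
	int i;

	/* Bounce the task between the two groups with the buggy move. */
	for (i = 0; i < 3; i++) {
		move_buggy(&se, &b);	/* a -> b: a is not decremented */
		move_buggy(&se, &a);	/* b -> a: b is not decremented */
	}
	printf("buggy: %lu\n", a.blocked_load_avg);	/* prints 400 */

	/* With the fixed move, a round trip is balanced. */
	move_fixed(&se, &a, &b);
	move_fixed(&se, &b, &a);
	printf("fixed: %lu\n", a.blocked_load_avg);	/* still 400 */
	return 0;
}

Every buggy round trip inflates a's blocked load by the task's
contribution, which is exactly the ever-growing blocked_load_avg
observed above.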

Signed-off-by: Byungchul Park <byungchul.park@....com>
---
 kernel/sched/fair.c |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
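
For reference, here is a rough userspace model of the two SMP helpers
the first hunk below relies on. This is an approximation for
illustration only: the kernel's decay_load() uses a precomputed
fixed-point series in which load halves every 32 periods, and the real
__synchronize_entity_decay() resets se->avg.decay_count rather than
recording the counter value as done here:

#include <stdint.h>

struct toy_avg {
	uint64_t decay_count;		/* counter value at last sync */
	uint64_t load_avg_contrib;
};

/* Crude stand-in for decay_load(): halve the load once every 32
 * elapsed periods (the kernel evaluates y^n with y^32 == 1/2). */
uint64_t toy_decay_load(uint64_t val, uint64_t n)
{
	n /= 32;
	return n >= 64 ? 0 : val >> n;
}

/* Model of __synchronize_entity_decay(): apply the decay the prev
 * cfs_rq has accumulated since the entity last synced, so that
 * load_avg_contrib again matches what that cfs_rq accounts for. */
uint64_t toy_synchronize_entity_decay(struct toy_avg *avg,
				      uint64_t decay_counter)
{
	uint64_t decays = decay_counter - avg->decay_count;

	avg->load_avg_contrib = toy_decay_load(avg->load_avg_contrib, decays);
	avg->decay_count = decay_counter;
	return decays;
}

/* Model of subtract_blocked_load_contrib(): remove the (now decayed)
 * contribution from the prev cfs_rq, clamping at zero. */
void toy_subtract_blocked_load_contrib(uint64_t *blocked_load_avg,
				       uint64_t load_contrib)
{
	if (load_contrib < *blocked_load_avg)
		*blocked_load_avg -= load_contrib;
	else
		*blocked_load_avg = 0;
}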

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ffa70dc..759a394 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8229,8 +8229,18 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 	if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
 		queued = 1;
 
-	if (!queued)
-		se->vruntime -= cfs_rq_of(se)->min_vruntime;
+	if (!queued) {
+		cfs_rq = cfs_rq_of(se);
+		se->vruntime -= cfs_rq->min_vruntime;
+
+#ifdef CONFIG_SMP
+		/*
+		 * We must synchronize with the prev cfs_rq.
+		 */
+		__synchronize_entity_decay(se);
+		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib);
+#endif
+	}
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
 	if (!queued) {
@@ -8238,9 +8248,7 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 		se->vruntime += cfs_rq->min_vruntime;
 #ifdef CONFIG_SMP
 		/*
-		 * migrate_task_rq_fair() will have removed our previous
-		 * contribution, but we must synchronize for ongoing future
-		 * decay.
+		 * We must synchronize with the next cfs_rq for ongoing future decay.
 		 */
 		se->avg.decay_count = atomic64_read(&cfs_rq->decay_counter);
 		cfs_rq->blocked_load_avg += se->avg.load_avg_contrib;
-- 
1.7.9.5
