Date:	Mon, 20 Jun 2016 07:02:40 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	matt@...eblueprint.co.uk, Yuyang Du <yuyang.du@...el.com>
Subject: [PATCH v7 1/4] sched/fair: Fix attaching task sched avgs twice when switching to fair or changing task group

Vincent reported that the first task attached to a new task group's cfs_rq
is attached once in attach_task_cfs_rq() and then once more when it is
enqueued (see https://lkml.org/lkml/2016/5/25/388).

Actually, it is worse: the sched avgs can sometimes be attached twice not
only when we change task groups but also when a task switches to the fair
class. The two scenarios are described below.

(1) Switch to fair class:

The sched class change is done like this:

	if (queued)
	  enqueue_task();
	check_class_changed()
	  switched_from()
	  switched_to()

If the task is on_rq, it has already been enqueued before switched_to()
runs. That enqueue attaches the sched avgs to the cfs_rq whenever the
task's last_update_time is 0, which is the case if the task was never fair
class. If so, we must not attach the avgs again in switched_to();
otherwise we attach them twice.

To handle both the on_rq and !on_rq cases, whether or not the task was
previously fair class, the simplest solution is to reset the task's
last_update_time to 0 when it is switched from fair. Task enqueue then
performs the sched avgs attachment uniformly, exactly once.

(2) Change between fair task groups:

The task groups are changed like this:

	if (queued)
	  dequeue_task()
	task_move_group()
	if (queued)
	  enqueue_task()

Unlike the switch-to-fair case, if the task is on_rq it is enqueued right
away after we move task groups; if not, it is enqueued later, when it
becomes runnable. The attach-twice problem can occur when both the cfs_rq
and the task are new, as Vincent discovered. The simplest solution is to
only reset the task's last_update_time in task_move_group(), and then let
enqueue_task() do the sched avgs attachment.

Reported-by: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Yuyang Du <yuyang.du@...el.com>
---
 kernel/sched/fair.c |   82 ++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 55 insertions(+), 27 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f75930b..d43d242 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2933,26 +2933,9 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 		update_tg_load_avg(cfs_rq, 0);
 }
 
-static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+/* Synchronize task with its cfs_rq */
+static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		goto skip_aging;
-
-	/*
-	 * If we got migrated (either between CPUs or between cgroups) we'll
-	 * have aged the average right before clearing @last_update_time.
-	 */
-	if (se->avg.last_update_time) {
-		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-				  &se->avg, 0, 0, NULL);
-
-		/*
-		 * XXX: we could have just aged the entire load away if we've been
-		 * absent from the fair class for too long.
-		 */
-	}
-
-skip_aging:
 	se->avg.last_update_time = cfs_rq->avg.last_update_time;
 	cfs_rq->avg.load_avg += se->avg.load_avg;
 	cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -3036,6 +3019,11 @@ static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
 }
 #endif
 
+static inline void reset_task_last_update_time(struct task_struct *p)
+{
+	p->se.avg.last_update_time = 0;
+}
+
 /*
  * Task first catches up with cfs_rq, and then subtract
  * itself from the cfs_rq (task must be off the queue now).
@@ -3088,9 +3076,8 @@ dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void remove_entity_load_avg(struct sched_entity *se) {}
 
 static inline void
-attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+static inline void reset_task_last_update_time(struct task_struct *p) {}
 
 static inline int idle_balance(struct rq *rq)
 {
@@ -8376,8 +8363,19 @@ static void attach_task_cfs_rq(struct task_struct *p)
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
 #endif
 
-	/* Synchronize task with its cfs_rq */
-	attach_entity_load_avg(cfs_rq, se);
+	/*
+	 * At this point, we don't do sched avgs attachment. Instead,
+	 * we uniformly do the attachment at enqueue time in all
+	 * scenarios:
+	 *
+	 * (1) Actually, we may have already done it (e.g., at enqueue_task()
+	 * in __sched_setscheduler() when a task switches to fair class).
+	 *
+	 * (2) We will do it right away (e.g., in sched_move_task() when
+	 * an on_rq task moves between groups).
+	 *
+	 * (3) In the future when this currently !on_rq task is enqueued.
+	 */
 
 	if (!vruntime_normalized(p))
 		se->vruntime += cfs_rq->min_vruntime;
@@ -8386,6 +8384,26 @@ static void attach_task_cfs_rq(struct task_struct *p)
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
+	/*
+	 * If the task changes back to fair class, we will attach the
+	 * task's sched avgs when it is enqueued, as the task's last_update_time
+	 * is reset to 0.
+	 *
+	 * This simplifies sched avgs attachment when a task is switched
+	 * to fair class, as we can unify the case where the task was
+	 * never fair class and the case where the task was fair class.
+	 *
+	 * Having lost last_update_time, we can't age the task's sched avgs
+	 * before attaching them, so the last updated sched avgs (in the
+	 * above detach_task_cfs_rq()) will be used.
+	 *
+	 * The time between switched_from_fair() and switched_to_fair()
+	 * could well have aged the entire load/util away anyway.
+	 * Despite that, there is no better way to account for the lost
+	 * record of sched avgs during that time, so we simply lean
+	 * toward no aging at all.
+	 */
+	reset_task_last_update_time(p);
 }
 
 static void switched_to_fair(struct rq *rq, struct task_struct *p)
@@ -8441,12 +8459,22 @@ static void task_move_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
-
+	attach_task_cfs_rq(p);
 #ifdef CONFIG_SMP
-	/* Tell se's cfs_rq has been changed -- migrated */
-	p->se.avg.last_update_time = 0;
+	/*
+	 * If the cfs_rq's last_update_time is 0, attaching the sched avgs
+	 * won't do anything useful: they will be decayed to 0 when any
+	 * sched_entity is enqueued to that cfs_rq.
+	 *
+	 * In that case, we must instead reset the task's last_update_time
+	 * to ensure we will attach the sched avgs when the task is
+	 * enqueued.
+	 */
+	if (!cfs_rq_of(&p->se)->avg.last_update_time)
+		reset_task_last_update_time(p);
+	else
+		attach_entity_load_avg(cfs_rq_of(&p->se), &p->se);
 #endif
-	attach_task_cfs_rq(p);
 }
 
 void free_fair_sched_group(struct task_group *tg)
-- 
1.7.9.5
