Message-ID: <20170424213528.GB23619@wtj.duckdns.org>
Date:   Mon, 24 Apr 2017 14:35:28 -0700
From:   Tejun Heo <tj@...nel.org>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: [PATCH 3/2] sched/fair: Skip __update_load_avg() on cfs_rq
 sched_entities

Now that a cfs_rq sched_entity's load_avg always gets propagated from
the associated cfs_rq, there's no point in calling __update_load_avg()
on it.  The two mechanisms compete with each other and we'd always be
using a value close to the propagated one anyway.

Skip __update_load_avg() for cfs_rq sched_entities.  Also, relocate
propagate_entity_load_avg() to signify that propagation is the
counterpart to __update_load_avg() for cfs_rq sched_entities.  This
puts the propagation before update_cfs_rq_load_avg(), which shouldn't
disturb anything.

Signed-off-by: Tejun Heo <tj@...nel.org>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Mike Galbraith <efault@....de>
Cc: Paul Turner <pjt@...gle.com>
---
Hello,

A follow-up patch.  This removes the __update_load_avg() call on cfs_rq
se's, as the value is now constantly kept in sync from the cfs_rq.  The
patch doesn't cause any noticeable changes in tests.

Thanks.

 kernel/sched/fair.c |   16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3293,20 +3293,22 @@ static inline void update_load_avg(struc
 	u64 now = cfs_rq_clock_task(cfs_rq);
 	struct rq *rq = rq_of(cfs_rq);
 	int cpu = cpu_of(rq);
-	int decayed;
+	int decayed = 0;
 
 	/*
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD)) {
-		__update_load_avg(now, cpu, &se->avg,
-			  se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
+	if (entity_is_task(se)) {
+		if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+			__update_load_avg(now, cpu, &se->avg,
+					  se->on_rq * scale_load_down(se->load.weight),
+					  cfs_rq->curr == se, NULL);
+	} else {
+		decayed |= propagate_entity_load_avg(se);
 	}
 
-	decayed  = update_cfs_rq_load_avg(now, cfs_rq, true);
-	decayed |= propagate_entity_load_avg(se);
+	decayed |= update_cfs_rq_load_avg(now, cfs_rq, true);
 
 	if (decayed && (flags & UPDATE_TG))
 		update_tg_load_avg(cfs_rq, 0);
