Message-Id: <1461958364-675-5-git-send-email-dietmar.eggemann@arm.com>
Date:	Fri, 29 Apr 2016 20:32:41 +0100
From:	Dietmar Eggemann <dietmar.eggemann@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org,
	Morten Rasmussen <morten.rasmussen@....com>
Subject: [PATCH 4/7] sched/fair: Clean up the logic in fix_small_imbalance()

Avoid having to add scaled_busy_load_per_task to both sides of the if
condition which determines whether imbalance has to be set to
busiest->load_per_task or not.

The imbn variable was introduced with commit 2dd73a4f09be ("[PATCH]
sched: implement smpnice") and the original if condition was

  if (max_load - this_load >= busiest_load_per_task * imbn)

which over time evolved into the current version, in which
scaled_busy_load_per_task appears on both sides of the if condition.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
---

The original smpnice implementation sets imbalance to the avg task
load of the busiest sched group (sg) if the difference between the
avg load of the busiest sg and the avg load of the local sg is
greater than or equal to imbn times the avg task load of the busiest
sg. imbn is 2 if there is no task running in the local sg or if the
avg task load of the busiest sg is smaller than or equal to the avg
task load of the local sg; otherwise imbn is lowered to 1.
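
A minimal sketch of that logic (variable names only approximate the
smpnice-era code and are shown for illustration, not verbatim kernel
code):

  unsigned int imbn = 2;

  if (this_nr_running &&
      busiest_load_per_task > this_load_per_task)
          imbn = 1;

  if (max_load - this_load >= busiest_load_per_task * imbn) {
          *imbalance = busiest_load_per_task;
          return;
  }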

imbn set to 2 makes sense when all tasks have the same priority: if
there are n tasks on the local sg, there have to be at least n+2
tasks on the busiest sg to give load balancing a chance to pull over
one task.
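
For example, with equal task loads of size L, n tasks on the local sg
and n+2 tasks on the busiest sg (capacity scaling ignored for
simplicity):

  max_load - this_load = (n+2)*L - n*L = 2*L >= 2 * L

so the condition holds, imbalance is set to L and one task of load L
can be pulled over, leaving n+1 tasks on each side.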

imbn set to 1 makes sense when the avg task load of the busiest sg is
greater than that of the local sg, since then the busiest sg may
contain at least one task with a smaller load which can be pulled
over to the local sg.
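
For example (unit capacity assumed), if the busiest sg runs two tasks
of load 3 and 1 (load_per_task = 2) and the local sg runs a single
task of load 1, then max_load - this_load = 4 - 1 = 3 >= 2 * 1, the
condition holds with imbn = 1 and imbalance is set to 2, so the
load-1 task can be pulled over.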

The current version effectively lowered imbn by one: it behaves as if
imbn were 1 instead of 2 in case the avg task load of sg_busiest
isn't greater than that of sg_local, and 0 instead of 1 in the other
case. This behaviour is not in sync with the rationale above for
setting imbn to those values.
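
Concretely, with x = scaled_busy_load_per_task the current condition

  busiest->avg_load + x >= local->avg_load + x * imbn

is equivalent to

  busiest->avg_load >= local->avg_load + x * (imbn - 1)

i.e. the effective factor is imbn - 1, which is 1 or 0 instead of 2
or 1. The patch below makes this explicit by using these values for
imbn directly and dropping the extra term on the left-hand side.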

 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c066574cff04..dc4828bbe50d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6915,7 +6915,7 @@ static inline
 void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 {
 	unsigned long tmp, capa_now = 0, capa_move = 0;
-	unsigned int imbn = 2;
+	unsigned int imbn = 1;
 	unsigned long scaled_busy_load_per_task;
 	struct sg_lb_stats *local, *busiest;
 
@@ -6925,13 +6925,13 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	if (!local->sum_nr_running)
 		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
 	else if (busiest->load_per_task > local->load_per_task)
-		imbn = 1;
+		imbn = 0;
 
 	scaled_busy_load_per_task =
 		(busiest->load_per_task * SCHED_CAPACITY_SCALE) /
 		busiest->group_capacity;
 
-	if (busiest->avg_load + scaled_busy_load_per_task >=
+	if (busiest->avg_load >=
 	    local->avg_load + (scaled_busy_load_per_task * imbn)) {
 		env->imbalance = busiest->load_per_task;
 		return;
-- 
1.9.1
