Message-Id: <1353157457-3649-5-git-send-email-alex.shi@intel.com>
Date:	Sat, 17 Nov 2012 21:04:16 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	mingo@...hat.com, peterz@...radead.org, pjt@...gle.com,
	preeti@...ux.vnet.ibm.com, vincent.guittot@...aro.org
Cc:	linux-kernel@...r.kernel.org
Subject: [RFC PATCH 4/5] sched: consider runnable load average in wake_affine and move_tasks

Besides the background accounting that already uses the runnable load
average, wake_affine and move_tasks are also key functions in load
balancing. We need to take the runnable load average into account in
them as well, so that load balancing makes an apples-to-apples load
comparison.
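
As a rough illustration of the intent (a simplified sketch, not the
kernel code itself: the struct and helper below are invented for
clarity, only load_avg_contrib mirrors a real field), the change
amounts to weighting each instantaneous load figure by the tracked
runnable load average before the comparison:

/*
 * Sketch only: scale an instantaneous load value by the per-entity
 * runnable load average, so the balance paths compare the same metric
 * that per-entity load tracking accumulates in the background.
 */
struct sched_avg_sketch {
	unsigned long load_avg_contrib;	/* decayed runnable contribution */
};

static inline unsigned long
runnable_weighted_load(unsigned long inst_load,
		       const struct sched_avg_sketch *avg)
{
	/* e.g. task_h_load(p) * p->se.avg.load_avg_contrib in the patch */
	return inst_load * avg->load_avg_contrib;
}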

Signed-off-by: Alex Shi <alex.shi@...el.com>
---
 kernel/sched/fair.c |   16 ++++++++++------
 1 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f918919..7064a13 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3164,8 +3164,10 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		tg = task_group(current);
 		weight = current->se.load.weight;
 
-		this_load += effective_load(tg, this_cpu, -weight, -weight);
-		load += effective_load(tg, prev_cpu, 0, -weight);
+		this_load += effective_load(tg, this_cpu, -weight, -weight)
+				* cpu_rq(this_cpu)->avg.load_avg_contrib;
+		load += effective_load(tg, prev_cpu, 0, -weight)
+				* cpu_rq(prev_cpu)->avg.load_avg_contrib;
 	}
 
 	tg = task_group(p);
@@ -3185,12 +3187,14 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 
 		this_eff_load = 100;
 		this_eff_load *= power_of(prev_cpu);
-		this_eff_load *= this_load +
-			effective_load(tg, this_cpu, weight, weight);
+		this_eff_load *= (this_load +
+			effective_load(tg, this_cpu, weight, weight))
+				* cpu_rq(this_cpu)->avg.load_avg_contrib;
 
 		prev_eff_load = 100 + (sd->imbalance_pct - 100) / 2;
 		prev_eff_load *= power_of(this_cpu);
-		prev_eff_load *= load + effective_load(tg, prev_cpu, 0, weight);
+		prev_eff_load *= (load + effective_load(tg, prev_cpu, 0, weight))
+				* cpu_rq(prev_cpu)->avg.load_avg_contrib;
 
 		balanced = this_eff_load <= prev_eff_load;
 	} else
@@ -4229,7 +4233,7 @@ static int move_tasks(struct lb_env *env)
 		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 			goto next;
 
-		load = task_h_load(p);
+		load = task_h_load(p) * p->se.avg.load_avg_contrib;
 
 		if (sched_feat(LB_MIN) && load < 16 && !env->failed)
 			goto next;
-- 
1.7.5.4

