Message-ID: <1267033418.16023.326.camel@laptop>
Date:	Wed, 24 Feb 2010 18:43:38 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Suresh Siddha <suresh.b.siddha@...el.com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"Ma, Ling" <ling.ma@...el.com>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	"ego@...ibm.com" <ego@...ibm.com>,
	"svaidy@...ux.vnet.ibm.com" <svaidy@...ux.vnet.ibm.com>,
	Arun R Bharadwaj <arun@...ux.vnet.ibm.com>
Subject: Re: change in sched cpu_power causing regressions with SCHED_MC

On Tue, 2010-02-23 at 16:13 -0800, Suresh Siddha wrote:
> 
> Ok. Here is the patch with the complete changelog. I added a "Cc stable" tag
> so that it can be picked up for 2.6.32 and 2.6.33, as we would like to
> see this regression addressed in those kernels. Peter/Ingo: Can you
> please queue this patch for -tip for 2.6.34?
> 

I've picked it up with the following changes, thanks!


Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -2471,10 +2471,6 @@ static inline void update_sg_lb_stats(st
 	/* Adjust by relative CPU power of the group */
 	sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
 
-
-	if (sgs->sum_nr_running)
-		avg_load_per_task =
-				sgs->sum_weighted_load / sgs->sum_nr_running;
 	/*
 	 * Consider the group unbalanced when the imbalance is larger
 	 * than the average weight of two tasks.
@@ -2484,6 +2480,9 @@ static inline void update_sg_lb_stats(st
 	 *      normalized nr_running number somewhere that negates
 	 *      the hierarchy?
 	 */
+	if (sgs->sum_nr_running)
+		avg_load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
+
 	if ((max_cpu_load - min_cpu_load) > 2*avg_load_per_task)
 		sgs->group_imb = 1;
 
@@ -2642,6 +2641,13 @@ static inline void calculate_imbalance(s
 		unsigned long *imbalance)
 {
 	unsigned long max_pull, load_above_capacity = ~0UL;
+
+	sds->busiest_load_per_task /= sds->busiest_nr_running;
+	if (sds->group_imb) {
+		sds->busiest_load_per_task =
+			min(sds->busiest_load_per_task, sds->avg_load);
+	}
+
 	/*
 	 * In the presence of smp nice balancing, certain scenarios can have
 	 * max load less than avg load(as we skip the groups at or below
@@ -2742,7 +2748,6 @@ find_busiest_group(struct sched_domain *
 	 * 4) This group is more busy than the avg busyness at this
 	 *    sched_domain.
 	 * 5) The imbalance is within the specified limit.
-	 * 6) Any rebalance would lead to ping-pong
 	 */
 	if (!(*balance))
 		goto ret;
@@ -2761,12 +2766,6 @@ find_busiest_group(struct sched_domain *
 	if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)
 		goto out_balanced;
 
-	sds.busiest_load_per_task /= sds.busiest_nr_running;
-	if (sds.group_imb)
-		sds.busiest_load_per_task =
-			min(sds.busiest_load_per_task, sds.avg_load);
-
-
 	/* Looks like there is an imbalance. Compute it */
 	calculate_imbalance(&sds, this_cpu, imbalance);
 	return sds.busiest;
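
For reference, here is a minimal standalone userspace sketch of the
avg_load_per_task / group_imb check that the first two hunks move around.
It is not kernel code: the struct name sg_stats and the sample numbers are
made up for illustration; only the "spread larger than two average tasks"
rule mirrors update_sg_lb_stats().

/*
 * Standalone sketch of the group-imbalance heuristic.  Hypothetical
 * struct and values; compile with a plain cc.
 */
#include <stdio.h>

struct sg_stats {
	unsigned long sum_weighted_load;	/* sum of task weights in the group */
	unsigned long sum_nr_running;		/* number of runnable tasks */
	unsigned long max_cpu_load;		/* load of the busiest CPU in the group */
	unsigned long min_cpu_load;		/* load of the idlest CPU in the group */
	int group_imb;
};

static void check_group_imb(struct sg_stats *sgs)
{
	unsigned long avg_load_per_task = 0;

	/* computed right before the test that consumes it, as in the reordered hunk */
	if (sgs->sum_nr_running)
		avg_load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;

	/* unbalanced when the max-min spread exceeds two average tasks */
	if ((sgs->max_cpu_load - sgs->min_cpu_load) > 2 * avg_load_per_task)
		sgs->group_imb = 1;
}

int main(void)
{
	/* three nice-0 tasks (weight 1024 each) all piled on one CPU */
	struct sg_stats sgs = {
		.sum_weighted_load = 3072, .sum_nr_running = 3,
		.max_cpu_load = 3072, .min_cpu_load = 0, .group_imb = 0,
	};

	check_group_imb(&sgs);
	printf("group_imb = %d\n", sgs.group_imb);	/* prints 1: 3072 > 2*1024 */
	return 0;
}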

