Message-Id: <20100409062119.10AC5CBB6D@localhost.localdomain>
Date: Fri, 09 Apr 2010 16:21:19 +1000
From: Michael Neuling <mikey@...ling.org>
To: Peter Zijlstra <peterz@...radead.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: <linuxppc-dev@...abs.org>, <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Gautham R Shenoy <ego@...ibm.com>
Subject: [PATCH 5/5] sched: make fix_small_imbalance work with asymmetric packing

With the asymmetric packing infrastructure in place, fix_small_imbalance()
causes idle higher threads to pull tasks off lower threads.  This is due
to an off-by-one error in its imbalance comparison.

Signed-off-by: Michael Neuling <mikey@...ling.org>
---
I'm not sure this is the right fix, but without it higher threads pull
tasks off the lower threads, the packing logic then pulls them back down,
and so on: tasks bounce around constantly.
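
To make the boundary concrete, here is a minimal userspace sketch (not
the kernel code itself) of the case in question, using made-up values: a
single task of weight 1024 on the busiest (lower) thread, the higher
thread idle, so imbn stays 2, and cpu_power equal to SCHED_LOAD_SCALE so
the power scaling is a no-op.  At exactly this point the old ">=" tells
the idle higher thread to pull the task, while ">" leaves it where the
packing logic put it:

/*
 * Userspace sketch of the fix_small_imbalance() boundary case.
 * Values are illustrative, not taken from a real run.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long max_load = 1024;			/* one task on busiest */
	unsigned long this_load = 0;			/* higher thread idle */
	unsigned long busiest_load_per_task = 1024;
	unsigned long cpu_power = SCHED_LOAD_SCALE;	/* no scaling */
	unsigned int imbn = 2;				/* this CPU has no tasks */

	unsigned long scaled = busiest_load_per_task * SCHED_LOAD_SCALE
							/ cpu_power;
	unsigned long lhs = max_load - this_load + scaled;	/* 2048 */
	unsigned long rhs = scaled * imbn;			/* 2048 */

	/* old test: 2048 >= 2048 is true, so the idle higher thread is
	 * told to pull the task, which packing then moves back down */
	printf("old (>=): pull=%d\n", lhs >= rhs);
	/* new test: 2048 > 2048 is false, so the task stays put */
	printf("new (>):  pull=%d\n", lhs > rhs);
	return 0;
}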
---
kernel/sched_fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6-ozlabs/kernel/sched_fair.c
===================================================================
--- linux-2.6-ozlabs.orig/kernel/sched_fair.c
+++ linux-2.6-ozlabs/kernel/sched_fair.c
@@ -2652,7 +2652,7 @@ static inline void fix_small_imbalance(s
 						 * SCHED_LOAD_SCALE;
 	scaled_busy_load_per_task /= sds->busiest->cpu_power;
 
-	if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
+	if (sds->max_load - sds->this_load + scaled_busy_load_per_task >
 			(scaled_busy_load_per_task * imbn)) {
 		*imbalance = sds->busiest_load_per_task;
 		return;
--