Message-ID: <tip-b0432d8f162c7d5d9537b4cb749d44076b76a783@git.kernel.org>
Date: Mon, 11 Apr 2011 10:46:23 GMT
From: tip-bot for Ken Chen <kenchen@...gle.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...hat.com,
a.p.zijlstra@...llo.nl, stable@...nel.org, kenchen@...gle.com,
tglx@...utronix.de, mingo@...e.hu
Subject: [tip:sched/urgent] sched: Fix sched-domain avg_load calculation
Commit-ID: b0432d8f162c7d5d9537b4cb749d44076b76a783
Gitweb: http://git.kernel.org/tip/b0432d8f162c7d5d9537b4cb749d44076b76a783
Author: Ken Chen <kenchen@...gle.com>
AuthorDate: Thu, 7 Apr 2011 17:23:22 -0700
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Mon, 11 Apr 2011 11:08:54 +0200
sched: Fix sched-domain avg_load calculation

In function find_busiest_group(), the sched-domain avg_load isn't
calculated at all if there is a group imbalance within the domain.  This
leads to an erroneous imbalance calculation.

The reason is that calculate_imbalance() sees sds->avg_load == 0 and
dumps the entire sds->max_load into the imbalance variable, which is
later used to migrate the entire load from the busiest CPU to the
puller CPU.

This has two really bad effects:

 1. A stampede of task migrations, which cannot break out of the bad
    state because of a positive feedback loop: a large load delta ->
    heavier load migration -> a larger imbalance, and the cycle goes
    on.

 2. Severe imbalance in CPU queue depth.  This causes really long
    scheduling latency blips, which badly affect applications with
    tight latency requirements.
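
To make the failure mode concrete, here is a small, self-contained
illustration (hypothetical userspace code, not the kernel's
calculate_imbalance(); the struct and the numbers are made up for the
example).  It shows how a zero avg_load turns the pull amount into the
busiest group's entire load, while a properly computed avg_load pulls
only the excess:

  #include <stdio.h>

  /* hypothetical, simplified stand-in for the kernel's sd_lb_stats */
  struct lb_stats {
          unsigned long avg_load;   /* sched-domain average load */
          unsigned long max_load;   /* load of the busiest group  */
  };

  /* simplified core of the imbalance calculation:
   * pull only the excess above the domain average */
  static unsigned long pull_amount(const struct lb_stats *s)
  {
          return s->max_load - s->avg_load;
  }

  int main(void)
  {
          struct lb_stats broken = { .avg_load = 0,    .max_load = 2048 };
          struct lb_stats fixed  = { .avg_load = 1536, .max_load = 2048 };

          /* avg_load == 0: the whole 2048 is scheduled for migration */
          printf("broken: pull %lu\n", pull_amount(&broken));

          /* avg_load computed: only the 512 excess moves */
          printf("fixed:  pull %lu\n", pull_amount(&fixed));
          return 0;
  }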

The fix is to have the kernel calculate the domain avg_load in both
cases.  This ensures that the imbalance calculation is always sensible
and that the target is usually halfway between the busiest and the
puller CPU.
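
For instance, with hypothetical round numbers and SCHED_LOAD_SCALE =
1024: two CPUs of cpu_power 1024 give total_pwr = 2048, and with
total_load = 3072 the formula in the patch yields avg_load =
1024 * 3072 / 2048 = 1536.  If the busiest CPU carries max_load = 2048
and the puller this_load = 1024, the resulting pull of
max_load - avg_load = 512 leaves both CPUs at 1536, roughly halfway
between them, instead of migrating the whole 2048.
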
Signed-off-by: Ken Chen <kenchen@...gle.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: <stable@...nel.org>
Link: http://lkml.kernel.org/r/20110408002322.3A0D812217F@elm.corp.google.com
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
kernel/sched_fair.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 7f00772..60f9d40 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3127,6 +3127,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
         if (!sds.busiest || sds.busiest_nr_running == 0)
                 goto out_balanced;
 
+        sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
+
         /*
          * If the busiest group is imbalanced the below checks don't
          * work because they assumes all things are equal, which typically
@@ -3151,7 +3153,6 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
          * Don't pull any tasks if this group is already above the domain
          * average load.
          */
-        sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
         if (sds.this_load >= sds.avg_load)
                 goto out_balanced;
--