Message-Id: <1536306664-29827-1-git-send-email-vincent.guittot@linaro.org>
Date: Fri, 7 Sep 2018 09:51:04 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: dietmar.eggemann@....com, jhugo@...eaurora.org,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH] sched/fair: fix load_balance redo for null imbalance

It can happen that load_balance() finds a busiest group and then a busiest
rq, but the calculated imbalance is in fact zero.

In that situation, detach_tasks() returns immediately and leaves the
LBF_ALL_PINNED flag set. The busiest CPU is then wrongly assumed to have
only pinned tasks and is removed from the load balance mask. We then redo
the load balance without the busiest CPU, which creates a wrong load
balance situation and generates wrong task migrations.
If the calculated imbalance is zero, it's useless to look for a busiest
rq: no task will be migrated, so we can return immediately.

This situation can happen on heterogeneous systems, or on SMP systems
when RT tasks reduce the capacity of some CPUs.

Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 309c93f..224bfae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8464,7 +8464,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	}
 
 	group = find_busiest_group(&env);
-	if (!group) {
+	if (!group || !env.imbalance) {
 		schedstat_inc(sd->lb_nobusyg[idle]);
 		goto out_balanced;
 	}
--
2.7.4