Message-Id: <1358996820-23036-4-git-send-email-alex.shi@intel.com>
Date: Thu, 24 Jan 2013 11:06:45 +0800
From: Alex Shi <alex.shi@...el.com>
To: torvalds@...ux-foundation.org, mingo@...hat.com,
peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, efault@....de
Cc: vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, alex.shi@...el.com
Subject: [patch v4 03/18] sched: fix find_idlest_group's messy logic

There are 4 situations in the function:
1, no group allows the task;
   so min_load = ULONG_MAX, this_load = 0, idlest = NULL
2, only the local group allows the task;
   so min_load = ULONG_MAX, this_load assigned, idlest = NULL
3, only non-local groups allow the task;
   so min_load assigned, this_load = 0, idlest != NULL
4, the local group and another group both allow the task.
   so min_load assigned, this_load assigned, idlest != NULL

The current logic returns NULL in the first 3 scenarios, and still
returns NULL in the 4th situation when the idlest group is heavier
than, or not sufficiently lighter than, the local group.

Actually, I think the groups in situations 2 and 3 are also eligible
to host the task, and in the 4th situation I agree we should keep
biasing toward the local group. Hence this patch.
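
To make the before/after behaviour concrete, here is a minimal
user-space sketch (not kernel code) that models the selection for
situations 2, 3 and 4. The group loads are made up, group index 0
stands for the local group, ULONG_MAX marks a group that does not
allow the task, -1 stands in for returning NULL, and the
imbalance_pct value of 125 is only an assumed example:

/*
 * Standalone model of find_idlest_group's pick, old vs. new logic.
 * Build with: gcc -Wall sketch.c
 */
#include <stdio.h>
#include <limits.h>

#define IMBALANCE_PCT   125     /* assumed sd->imbalance_pct */

/* Old behaviour: only non-local groups can ever become idlest. */
static int old_pick(const unsigned long *load, int nr)
{
        unsigned long min_load = ULONG_MAX, this_load = 0;
        int imbalance = 100 + (IMBALANCE_PCT - 100) / 2;
        int i, idlest = -1;

        for (i = 0; i < nr; i++) {
                if (load[i] == ULONG_MAX)       /* task not allowed here */
                        continue;
                if (i == 0)                     /* local group */
                        this_load = load[i];
                else if (load[i] < min_load) {
                        min_load = load[i];
                        idlest = i;
                }
        }
        if (idlest < 0 || 100 * this_load < (unsigned long)imbalance * min_load)
                return -1;                      /* NULL: caller falls back */
        return idlest;
}

/* New behaviour: the local group is a candidate too, but still favoured. */
static int new_pick(const unsigned long *load, int nr)
{
        unsigned long min_load = ULONG_MAX, this_load = 0;
        int imbalance = 100 + (IMBALANCE_PCT - 100) / 2;
        int i, idlest = -1, this_group = -1;

        for (i = 0; i < nr; i++) {
                if (load[i] == ULONG_MAX)
                        continue;
                if (i == 0) {
                        this_load = load[i];
                        this_group = i;
                }
                if (load[i] < min_load) {
                        min_load = load[i];
                        idlest = i;
                }
        }
        if (this_group >= 0 && idlest != this_group &&
            100 * this_load < (unsigned long)imbalance * min_load)
                idlest = this_group;            /* bias toward our group */
        return idlest;
}

int main(void)
{
        /* situations 2, 3 and 4 from above; loads are hypothetical */
        unsigned long only_local[]  = { 400, ULONG_MAX, ULONG_MAX };
        unsigned long only_remote[] = { ULONG_MAX, 300, 500 };
        unsigned long both[]        = { 400, 380, 500 };

        printf("only local : old %d, new %d\n",
               old_pick(only_local, 3), new_pick(only_local, 3));
        printf("only remote: old %d, new %d\n",
               old_pick(only_remote, 3), new_pick(only_remote, 3));
        printf("both       : old %d, new %d\n",
               old_pick(both, 3), new_pick(both, 3));
        return 0;
}

With these hypothetical loads the old logic returns NULL (-1) in all
three cases, while the new logic returns group 0, group 1 and group 0
respectively.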
Signed-off-by: Alex Shi <alex.shi@...el.com>
Reviewed-by: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
---
kernel/sched/fair.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d3a95d..3c7b09a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3181,6 +3181,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
int this_cpu, int load_idx)
{
struct sched_group *idlest = NULL, *group = sd->groups;
+ struct sched_group *this_group = NULL;
unsigned long min_load = ULONG_MAX, this_load = 0;
int imbalance = 100 + (sd->imbalance_pct-100)/2;
@@ -3215,14 +3216,19 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
if (local_group) {
this_load = avg_load;
- } else if (avg_load < min_load) {
+ this_group = group;
+ }
+ if (avg_load < min_load) {
min_load = avg_load;
idlest = group;
}
} while (group = group->next, group != sd->groups);
- if (!idlest || 100*this_load < imbalance*min_load)
- return NULL;
+ if (this_group && idlest != this_group)
+ /* Bias toward our group again */
+ if (100*this_load < imbalance*min_load)
+ idlest = this_group;
+
return idlest;
}
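
For reference on the bias check that is kept above: with an assumed
sd->imbalance_pct of 125 (the real value depends on the sched domain
level), imbalance works out to 100 + (125 - 100)/2 = 112, so after
this patch the local group is still chosen unless the idlest remote
group's average load is at most this_load * 100 / 112, i.e. roughly
11% lighter than the local group.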
--
1.7.12