Message-Id: <1360820921-2513-6-git-send-email-iamjoonsoo.kim@lge.com>
Date: Thu, 14 Feb 2013 14:48:38 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH 5/8] sched: move up affinity check to mitigate useless redoing overhead
Currently, LBF_ALL_PINNED is cleared only after the affinity check has
passed. So if can_migrate_task() fails because of a small load value or
a small imbalance value, LBF_ALL_PINNED stays set and we end up
triggering 'redo' in load_balance().

The imbalance value is often so small that no task can be moved to
another cpu, and this situation is likely to persist even after we
change the target cpu. So this patch performs the affinity check first
and clears LBF_ALL_PINNED before the load checks, in order to mitigate
the useless redo overhead when can_migrate_task() fails for the above
reason.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
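
A minimal userspace sketch of the intended effect follows. It is not
kernel code: the struct definitions, field names and the main() driver
are simplified stand-ins invented for illustration; only the ordering
of the checks and the LBF_ALL_PINNED handling mirror the patch.

/*
 * Standalone sketch (not kernel code): simplified stand-ins for the
 * scheduler's types, showing why the order of the checks matters.
 */
#include <stdbool.h>
#include <stdio.h>

#define LBF_ALL_PINNED 0x01

struct task { int load; bool allowed_on_dst; };
struct env  { unsigned int flags; int imbalance; };

/* Check order after this patch: affinity first, then the load checks. */
static bool can_migrate(struct task *p, struct env *env)
{
	if (!p->allowed_on_dst)
		return false;	/* pinned: LBF_ALL_PINNED stays set */

	/* Found one task that could run on dst_cpu, so a redo is pointless. */
	env->flags &= ~LBF_ALL_PINNED;

	if (p->load / 2 > env->imbalance)
		return false;	/* too heavy for the current imbalance */

	return true;
}

int main(void)
{
	struct env env = { .flags = LBF_ALL_PINNED, .imbalance = 1 };
	struct task p = { .load = 100, .allowed_on_dst = true };

	/* Migration fails only because of the load, not affinity ... */
	printf("can migrate:    %d\n", can_migrate(&p, &env));
	/* ... so the flag is already clear and load_balance() won't redo. */
	printf("LBF_ALL_PINNED: %d\n", !!(env.flags & LBF_ALL_PINNED));
	return 0;
}

With the affinity check done first, a task rejected purely on
load/imbalance grounds has already cleared LBF_ALL_PINNED, so
load_balance() does not uselessly retry with a new dst_cpu.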
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 849bc8e..bb373f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3888,9 +3888,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
/*
* We do not migrate tasks that are:
* 1) throttled_lb_pair, or
- * 2) task's load is too low, or
- * 3) task's too large to imbalance, or
- * 4) cannot be migrated to this CPU due to cpus_allowed, or
+ * 2) cannot be migrated to this CPU due to cpus_allowed, or
+ * 3) task's load is too low, or
+ * 4) task's too large to imbalance, or
* 5) running (obviously), or
* 6) are cache-hot on their current CPU.
*/
@@ -3898,16 +3898,6 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
- if (!lb_active) {
- *load = task_h_load(p);
- if (sched_feat(LB_MIN) &&
- *load < 16 && !env->sd->nr_balance_failed)
- return 0;
-
- if ((*load / 2) > env->imbalance)
- return 0;
- }
-
if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
int new_dst_cpu;
@@ -3936,6 +3926,16 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env,
/* Record that we found atleast one task that could run on dst_cpu */
env->flags &= ~LBF_ALL_PINNED;
+ if (!lb_active) {
+ *load = task_h_load(p);
+ if (sched_feat(LB_MIN) &&
+ *load < 16 && !env->sd->nr_balance_failed)
+ return 0;
+
+ if ((*load / 2) > env->imbalance)
+ return 0;
+ }
+
if (task_running(env->src_rq, p)) {
schedstat_inc(p, se.statistics.nr_failed_migrations_running);
return 0;
--
1.7.9.5