Message-Id: <1371045758-5296-2-git-send-email-fweisbec@gmail.com>
Date: Wed, 12 Jun 2013 16:02:33 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Li Zhong <zhong@...ux.vnet.ibm.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>
Subject: [PATCH 1/6] sched: Disable lb_bias feature for full dynticks
If we run in full dynticks mode, we currently have no way to
correctly update the secondary decaying indexes of the CPU
load stats, as they are normally maintained by update_cpu_load_active()
at each tick.
We do have an infrastructure that handles tickless loads
(cf. decay_load_missed()), but it only works for idle tickless
periods, i.e. when the CPU has run nothing but the idle task
over the tickless timeslice.
Until we can provide a sane mathematical solution to handle full
dynticks loads, let's simply deactivate the LB_BIAS sched feature
under CONFIG_NO_HZ_FULL, as it is currently the only user of the
decayed load records.
The first load index, which represents the current runqueue load
weight, is still maintained and usable.
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Li Zhong <zhong@...ux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Borislav Petkov <bp@...en8.de>
---
kernel/sched/fair.c | 13 +++++++++++--
kernel/sched/features.h | 3 +++
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..81b62d6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2922,6 +2922,15 @@ static unsigned long weighted_cpuload(const int cpu)
 	return cpu_rq(cpu)->load.weight;
 }
 
+static inline int sched_lb_bias(void)
+{
+#ifndef CONFIG_NO_HZ_FULL
+	return sched_feat(LB_BIAS);
+#else
+	return 0;
+#endif
+}
+
 /*
  * Return a low guess at the load of a migration-source cpu weighted
  * according to the scheduling class and "nice" value.
@@ -2934,7 +2943,7 @@ static unsigned long source_load(int cpu, int type)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
 
-	if (type == 0 || !sched_feat(LB_BIAS))
+	if (type == 0 || !sched_lb_bias())
 		return total;
 
 	return min(rq->cpu_load[type-1], total);
@@ -2949,7 +2958,7 @@ static unsigned long target_load(int cpu, int type)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
 
-	if (type == 0 || !sched_feat(LB_BIAS))
+	if (type == 0 || !sched_lb_bias())
 		return total;
 
 	return max(rq->cpu_load[type-1], total);
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 99399f8..635f902 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -43,7 +43,10 @@ SCHED_FEAT(ARCH_POWER, true)
 SCHED_FEAT(HRTICK, false)
 SCHED_FEAT(DOUBLE_TICK, false)
+
+#ifndef CONFIG_NO_HZ_FULL
 SCHED_FEAT(LB_BIAS, true)
+#endif
/*
* Decrement CPU power based on time not spent running tasks
--
1.7.5.4