Message-ID: <tip-e0f5f3afd2cffa96291cd852056d83ff4e2e99c7@git.kernel.org>
Date: Sun, 13 Sep 2015 04:03:23 -0700
From: tip-bot for Dietmar Eggemann <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: mingo@...nel.org, Dietmar.Eggemann@....com, tglx@...utronix.de,
linux-kernel@...r.kernel.org, hpa@...or.com, peterz@...radead.org,
efault@....de, torvalds@...ux-foundation.org,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
Juri.Lelli@....com, morten.rasmussen@....com
Subject: [tip:sched/core] sched/fair: Make load tracking frequency scale-invariant

Commit-ID: e0f5f3afd2cffa96291cd852056d83ff4e2e99c7
Gitweb: http://git.kernel.org/tip/e0f5f3afd2cffa96291cd852056d83ff4e2e99c7
Author: Dietmar Eggemann <dietmar.eggemann@....com>
AuthorDate: Fri, 14 Aug 2015 17:23:09 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Sun, 13 Sep 2015 09:52:55 +0200

sched/fair: Make load tracking frequency scale-invariant

Apply a frequency scaling correction factor to per-entity load tracking to
make it frequency invariant. Currently, load appears bigger when the CPU
is running slower, which skews load-balancing decisions.

Each segment of the sched_avg.load_sum geometric series is now scaled by
the current frequency, so that the sched_avg.load_avg of each sched entity
is invariant under frequency scaling. Moreover, cfs_rq.runnable_load_sum
is scaled by the current frequency as well.
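
To see what the correction does in isolation: each accrued time delta is
multiplied by the current frequency capacity (in [0..SCHED_CAPACITY_SCALE])
and shifted back down before it enters the geometric series. Below is a
minimal userspace sketch of that arithmetic; SCHED_CAPACITY_SHIFT == 10 and
the nice-0 weight of 1024 mirror the kernel's values, while the standalone
program and its sample numbers are purely illustrative:

#include <stdio.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

/* Same shape as the scale() helper introduced by the patch below. */
static inline uint64_t scale(uint64_t v, uint64_t s)
{
	return (v * s) >> SCHED_CAPACITY_SHIFT;
}

int main(void)
{
	uint64_t weight = 1024;	/* nice-0 task weight */
	uint64_t delta  = 1024;	/* one full ~1ms PELT segment */
	uint64_t half   = SCHED_CAPACITY_SCALE / 2;	/* CPU at 50% freq */

	/* Without the correction, the segment contributes the same load
	 * no matter how fast the CPU was actually running: */
	printf("unscaled: %llu\n", (unsigned long long)(weight * delta));

	/* With it, the same wall-clock segment at half frequency
	 * contributes half as much: */
	printf("scaled:   %llu\n",
	       (unsigned long long)(weight * scale(delta, half)));
	return 0;
}

At half the maximum frequency the same wall-clock segment contributes
exactly half the load, which is the invariance the changelog describes.
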
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <Dietmar.Eggemann@....com>
Cc: Juri Lelli <Juri.Lelli@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: daniel.lezcano@...aro.org
Cc: mturquette@...libre.com
Cc: pang.xunlei@....com.cn
Cc: rjw@...ysocki.net
Cc: sgurrappadi@...dia.com
Cc: yuyang.du@...el.com
Link: http://lkml.kernel.org/r/1439569394-11974-2-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 include/linux/sched.h |  6 +++---
 kernel/sched/fair.c   | 27 +++++++++++++++++----------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a4ab9da..c8d923b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1177,9 +1177,9 @@ struct load_weight {
 
 /*
  * The load_avg/util_avg accumulates an infinite geometric series.
- * 1) load_avg factors the amount of time that a sched_entity is
- * runnable on a rq into its weight. For cfs_rq, it is the aggregated
- * such weights of all runnable and blocked sched_entities.
+ * 1) load_avg factors frequency scaling into the amount of time that a
+ * sched_entity is runnable on a rq into its weight. For cfs_rq, it is the
+ * aggregated such weights of all runnable and blocked sched_entities.
  * 2) util_avg factors frequency scaling into the amount of time
  * that a sched_entity is running on a CPU, in the range [0..SCHED_LOAD_SCALE].
  * For cfs_rq, it is the aggregated such times of all runnable and
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 47ece22..86cb27c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2515,6 +2515,8 @@ static u32 __compute_runnable_contrib(u64 n)
 	return contrib + runnable_avg_yN_sum[n];
 }
 
+#define scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 /*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series. To do this we sub-divide our runnable
@@ -2547,9 +2549,9 @@ static __always_inline int
 __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
-	u64 delta, periods;
+	u64 delta, scaled_delta, periods;
 	u32 contrib;
-	int delta_w, decayed = 0;
+	int delta_w, scaled_delta_w, decayed = 0;
 	unsigned long scale_freq = arch_scale_freq_capacity(NULL, cpu);
 
 	delta = now - sa->last_update_time;
@@ -2585,13 +2587,16 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		 * period and accrue it.
 		 */
 		delta_w = 1024 - delta_w;
+		scaled_delta_w = scale(delta_w, scale_freq);
 		if (weight) {
-			sa->load_sum += weight * delta_w;
-			if (cfs_rq)
-				cfs_rq->runnable_load_sum += weight * delta_w;
+			sa->load_sum += weight * scaled_delta_w;
+			if (cfs_rq) {
+				cfs_rq->runnable_load_sum +=
+						weight * scaled_delta_w;
+			}
 		}
 		if (running)
-			sa->util_sum += delta_w * scale_freq >> SCHED_CAPACITY_SHIFT;
+			sa->util_sum += scaled_delta_w;
 
 		delta -= delta_w;
 
@@ -2608,23 +2613,25 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 
 		/* Efficiently calculate \sum (1..n_period) 1024*y^i */
 		contrib = __compute_runnable_contrib(periods);
+		contrib = scale(contrib, scale_freq);
 		if (weight) {
 			sa->load_sum += weight * contrib;
 			if (cfs_rq)
 				cfs_rq->runnable_load_sum += weight * contrib;
 		}
 		if (running)
-			sa->util_sum += contrib * scale_freq >> SCHED_CAPACITY_SHIFT;
+			sa->util_sum += contrib;
 	}
 
 	/* Remainder of delta accrued against u_0` */
+	scaled_delta = scale(delta, scale_freq);
 	if (weight) {
-		sa->load_sum += weight * delta;
+		sa->load_sum += weight * scaled_delta;
 		if (cfs_rq)
-			cfs_rq->runnable_load_sum += weight * delta;
+			cfs_rq->runnable_load_sum += weight * scaled_delta;
 	}
 	if (running)
-		sa->util_sum += delta * scale_freq >> SCHED_CAPACITY_SHIFT;
+		sa->util_sum += scaled_delta;
 
 	sa->period_contrib += delta;
--
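
Worth noting about the shape of the change: after this patch all three
accrual sites in __update_load_avg() (the partial first period, the fully
decayed middle periods, and the remainder) apply the same frequency-scaled
delta to both load_sum and util_sum. A reduced stand-alone sketch of that
shared pattern follows; sa_demo and accrue() are illustrative names and a
simplification (the per-period decay is omitted), not the kernel code:

#include <stdint.h>

#define SCHED_CAPACITY_SHIFT	10

/* As in the patch: v scaled by a capacity s in [0..1024]. */
static inline uint64_t scale(uint64_t v, uint64_t s)
{
	return (v * s) >> SCHED_CAPACITY_SHIFT;
}

/* Illustrative stand-in for the relevant struct sched_avg fields. */
struct sa_demo {
	uint64_t load_sum;	/* weighted, frequency-scaled runnable time */
	uint32_t util_sum;	/* frequency-scaled running time */
};

/* One frequency-scaled delta now feeds both sums. */
static void accrue(struct sa_demo *sa, uint64_t delta,
		   unsigned long weight, int running,
		   unsigned long scale_freq)
{
	uint64_t scaled_delta = scale(delta, scale_freq);

	if (weight)
		sa->load_sum += weight * scaled_delta;
	if (running)
		sa->util_sum += scaled_delta;
}

With scale_freq == 512 (half capacity) a full 1024 us segment accrues
weight * 512 rather than weight * 1024, while with scale_freq == 1024 the
scaling is a no-op, so behaviour at maximum frequency is unchanged.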