Message-ID: <tip-41e0d37f7ac81297c07ba311e4ad39465b8c8295@git.kernel.org>
Date: Sat, 23 Apr 2016 05:57:58 -0700
From: tip-bot for Steve Muckle <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: vincent.guittot@...aro.org, mingo@...nel.org, tglx@...utronix.de,
Juri.Lelli@....com, peterz@...radead.org, mturquette@...libre.com,
efault@....de, rafael@...nel.org, linux-kernel@...r.kernel.org,
patrick.bellasi@....com, morten.rasmussen@....com,
dietmar.eggemann@....com, smuckle@...aro.org,
steve.muckle@...aro.org, hpa@...or.com
Subject: [tip:sched/core] sched/fair: Do not call cpufreq hook unless util
changed
Commit-ID: 41e0d37f7ac81297c07ba311e4ad39465b8c8295
Gitweb: http://git.kernel.org/tip/41e0d37f7ac81297c07ba311e4ad39465b8c8295
Author: Steve Muckle <steve.muckle@...aro.org>
AuthorDate: Mon, 21 Mar 2016 17:21:08 -0700
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Sat, 23 Apr 2016 14:20:36 +0200
sched/fair: Do not call cpufreq hook unless util changed
There's no reason to call the cpufreq hook if the root cfs_rq
utilization has not been modified.
Signed-off-by: Steve Muckle <smuckle@...aro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Juri Lelli <Juri.Lelli@....com>
Cc: Michael Turquette <mturquette@...libre.com>
Cc: Mike Galbraith <efault@....de>
Cc: Morten Rasmussen <morten.rasmussen@....com>
Cc: Patrick Bellasi <patrick.bellasi@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rafael J. Wysocki <rafael@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Link: http://lkml.kernel.org/r/1458606068-7476-2-git-send-email-smuckle@linaro.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6df80d4..8155281 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2879,20 +2879,21 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
{
struct sched_avg *sa = &cfs_rq->avg;
struct rq *rq = rq_of(cfs_rq);
- int decayed, removed = 0;
+ int decayed, removed_load = 0, removed_util = 0;
int cpu = cpu_of(rq);
if (atomic_long_read(&cfs_rq->removed_load_avg)) {
s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
sa->load_avg = max_t(long, sa->load_avg - r, 0);
sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
- removed = 1;
+ removed_load = 1;
}
if (atomic_long_read(&cfs_rq->removed_util_avg)) {
long r = atomic_long_xchg(&cfs_rq->removed_util_avg, 0);
sa->util_avg = max_t(long, sa->util_avg - r, 0);
sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0);
+ removed_util = 1;
}
decayed = __update_load_avg(now, cpu, sa,
@@ -2903,7 +2904,8 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
cfs_rq->load_last_update_time_copy = sa->last_update_time;
#endif
- if (cpu == smp_processor_id() && &rq->cfs == cfs_rq) {
+ if (cpu == smp_processor_id() && &rq->cfs == cfs_rq &&
+ (decayed || removed_util)) {
unsigned long max = rq->cpu_capacity_orig;
/*
@@ -2926,7 +2928,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
min(sa->util_avg, max), max);
}
- return decayed || removed;
+ return decayed || removed_load;
}
/* Update task and its cfs_rq load average */