Message-ID: <6282396.VVEdgVYxO3@vostro.rjw.lan>
Date: Fri, 06 May 2016 14:58:43 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linux PM list <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steve Muckle <steve.muckle@...aro.org>,
Ingo Molnar <mingo@...nel.org>
Subject: [PATCH] sched/fair: Invoke cpufreq hooks for CONFIG_SMP unset
From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Commit 34e2c555f3e1 (cpufreq: Add mechanism for registering utilization
update callbacks) overlooked the fact that update_load_avg(), where
CFS invokes the cpufreq utilization update hooks, becomes an empty stub
when CONFIG_SMP is unset. In consequence, if CONFIG_SMP is not set,
cpufreq governors are never invoked from CFS and they do not have a
chance to evaluate CPU performance levels and update them often enough.
Needless to say, things do not work as expected then.
Fix the problem by making the !CONFIG_SMP stub of update_load_avg()
invoke cpufreq update hooks too.
Fixes: 34e2c555f3e1 (cpufreq: Add mechanism for registering utilization update callbacks)
Reported-by: Steve Muckle <steve.muckle@...aro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
---
This needs to go into v4.6.
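For context, the hook machinery that the stub below taps into was added by
commit 34e2c555f3e1 and is essentially a per-CPU callback pointer. The
sketch below is a from-memory approximation of that v4.6 code, not part of
this patch; the exact guards and file layout may differ:

struct update_util_data {
	void (*func)(struct update_util_data *data, u64 time,
		     unsigned long util, unsigned long max);
};

/* One callback pointer per CPU, set by the active cpufreq governor. */
DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);

static inline void cpufreq_update_util(u64 time, unsigned long util,
				       unsigned long max)
{
	struct update_util_data *data;

	data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
	if (data)
		data->func(data, time, util, max);
}

/*
 * Used when the current utilization is not known (RT/DL activity, or the
 * !CONFIG_SMP stub added by this patch): pass util == ULONG_MAX so the
 * governor knows it has to evaluate the CPU state on its own.
 */
static inline void cpufreq_trigger_update(u64 time)
{
	cpufreq_update_util(time, ULONG_MAX, 0);
}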
---
kernel/sched/fair.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
Index: linux-pm/kernel/sched/fair.c
===================================================================
--- linux-pm.orig/kernel/sched/fair.c
+++ linux-pm/kernel/sched/fair.c
@@ -3030,7 +3030,14 @@ static int idle_balance(struct rq *this_
 
 #else /* CONFIG_SMP */
 
-static inline void update_load_avg(struct sched_entity *se, int update_tg) {}
+static inline void update_load_avg(struct sched_entity *se, int not_used)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	struct rq *rq = rq_of(cfs_rq);
+
+	cpufreq_trigger_update(rq_clock(rq));
+}
+
 static inline void
 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
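For completeness, the governor side attaches to these hooks per CPU. The
handler and data names below are hypothetical, and the helper names are
again from memory of the 34e2c555f3e1 interface, so treat them as
approximate:

/* Hypothetical governor callback invoked from the scheduler paths. */
static void my_util_handler(struct update_util_data *data, u64 time,
			    unsigned long util, unsigned long max)
{
	/* Re-evaluate the frequency/P-state for this CPU. */
}

static struct update_util_data my_data = { .func = my_util_handler };

/* On governor start, for each CPU in the policy: */
cpufreq_set_update_util_data(cpu, &my_data);

/* On governor stop: clear the pointer, then wait for in-flight callbacks. */
cpufreq_set_update_util_data(cpu, NULL);
synchronize_sched();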