Message-ID: <tip-7e1a9208f6c7e66bb4e5d2ed18dfd191230f431b@git.kernel.org>
Date: Wed, 10 Jan 2018 04:20:22 -0800
From: tip-bot for Juri Lelli <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...nel.org,
juri.lelli@....com, peterz@...radead.org,
torvalds@...ux-foundation.org, tglx@...utronix.de,
rostedt@...dmis.org
Subject: [tip:sched/core] sched/cpufreq: Move
arch_scale_{freq,cpu}_capacity() outside of #ifdef CONFIG_SMP
Commit-ID: 7e1a9208f6c7e66bb4e5d2ed18dfd191230f431b
Gitweb: https://git.kernel.org/tip/7e1a9208f6c7e66bb4e5d2ed18dfd191230f431b
Author: Juri Lelli <juri.lelli@....com>
AuthorDate: Mon, 4 Dec 2017 11:23:24 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 10 Jan 2018 12:53:35 +0100
sched/cpufreq: Move arch_scale_{freq,cpu}_capacity() outside of #ifdef CONFIG_SMP
Currently, frequency and cpu capacity scaling is only performed on
CONFIG_SMP systems (as CFS PELT signals are only present for such
systems). However, other scheduling classes also want to do freq/cpu scaling,
and they want it for !CONFIG_SMP configurations as well.
arch_scale_freq_capacity() is useful to implement frequency scaling even
on !CONFIG_SMP platforms, so we simply move it outside CONFIG_SMP
ifdeffery.
Even if arch_scale_cpu_capacity() is not useful on !CONFIG_SMP platforms,
we make a default implementation available for such configurations anyway
to simplify scheduler code doing CPU scale invariance.
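
For illustration only (not part of the patch): with both helpers and their
SCHED_CAPACITY_SCALE defaults visible regardless of CONFIG_SMP, common
scheduler code can apply frequency/CPU scale invariance without wrapping the
calls in #ifdef CONFIG_SMP. The helper name below is hypothetical:

static u64 scale_invariant_delta(u64 delta, struct sched_domain *sd, int cpu)
{
	unsigned long scale_freq = arch_scale_freq_capacity(cpu);
	unsigned long scale_cpu = arch_scale_cpu_capacity(sd, cpu);

	/* Both factors default to SCHED_CAPACITY_SCALE, i.e. a no-op. */
	delta = (delta * scale_freq) >> SCHED_CAPACITY_SHIFT;
	delta = (delta * scale_cpu) >> SCHED_CAPACITY_SHIFT;

	return delta;
}
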
Signed-off-by: Juri Lelli <juri.lelli@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: alessio.balsini@....com
Cc: bristot@...hat.com
Cc: claudio@...dence.eu.com
Cc: dietmar.eggemann@....com
Cc: joelaf@...gle.com
Cc: juri.lelli@...hat.com
Cc: luca.abeni@...tannapisa.it
Cc: mathieu.poirier@...aro.org
Cc: morten.rasmussen@....com
Cc: patrick.bellasi@....com
Cc: rjw@...ysocki.net
Cc: tkjos@...roid.com
Cc: tommaso.cucinotta@...tannapisa.it
Cc: vincent.guittot@...aro.org
Cc: viresh.kumar@...aro.org
Link: http://lkml.kernel.org/r/20171204102325.5110-8-juri.lelli@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
include/linux/sched/topology.h | 12 ++++++------
kernel/sched/sched.h | 13 ++++++++++---
2 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index cf257c2..2634774 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -7,6 +7,12 @@
#include <linux/sched/idle.h>
/*
+ * Increase resolution of cpu_capacity calculations
+ */
+#define SCHED_CAPACITY_SHIFT SCHED_FIXEDPOINT_SHIFT
+#define SCHED_CAPACITY_SCALE (1L << SCHED_CAPACITY_SHIFT)
+
+/*
* sched-domains (multiprocessor balancing) declarations:
*/
#ifdef CONFIG_SMP
@@ -27,12 +33,6 @@
#define SD_OVERLAP 0x2000 /* sched_domains of this level overlap */
#define SD_NUMA 0x4000 /* cross-node balancing */
-/*
- * Increase resolution of cpu_capacity calculations
- */
-#define SCHED_CAPACITY_SHIFT SCHED_FIXEDPOINT_SHIFT
-#define SCHED_CAPACITY_SCALE (1L << SCHED_CAPACITY_SHIFT)
-
#ifdef CONFIG_SCHED_SMT
static inline int cpu_smt_flags(void)
{
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b710019..e122c89 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1670,9 +1670,6 @@ static inline int hrtick_enabled(struct rq *rq)
#endif /* CONFIG_SCHED_HRTICK */
-#ifdef CONFIG_SMP
-extern void sched_avg_update(struct rq *rq);
-
#ifndef arch_scale_freq_capacity
static __always_inline
unsigned long arch_scale_freq_capacity(int cpu)
@@ -1681,6 +1678,9 @@ unsigned long arch_scale_freq_capacity(int cpu)
}
#endif
+#ifdef CONFIG_SMP
+extern void sched_avg_update(struct rq *rq);
+
#ifndef arch_scale_cpu_capacity
static __always_inline
unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
@@ -1698,6 +1698,13 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
sched_avg_update(rq);
}
#else
+#ifndef arch_scale_cpu_capacity
+static __always_inline
+unsigned long arch_scale_cpu_capacity(void __always_unused *sd, int cpu)
+{
+ return SCHED_CAPACITY_SCALE;
+}
+#endif
static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
static inline void sched_avg_update(struct rq *rq) { }
#endif
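
A note on the fixed-point arithmetic (editorial, not from the patch):
SCHED_CAPACITY_SHIFT equals SCHED_FIXEDPOINT_SHIFT (10), so
SCHED_CAPACITY_SCALE is 1024 and stands for full capacity/frequency; a CPU
running at half its maximum frequency would report roughly 512. A hypothetical
architecture override (names below are illustrative) returns a value in that
range and hooks itself up by defining the symbol, which makes the #ifndef
fallback in kernel/sched/sched.h drop out:

/* Hypothetical arch header sketch (e.g. asm/topology.h), not from this patch. */
DECLARE_PER_CPU(unsigned long, freq_scale);	/* updated by cpufreq callbacks */

static __always_inline unsigned long my_arch_scale_freq_capacity(int cpu)
{
	/* ~1024 at max frequency, ~512 at half the max frequency, etc. */
	return per_cpu(freq_scale, cpu);
}
#define arch_scale_freq_capacity my_arch_scale_freq_capacity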