Message-ID: <158835732955.8414.10159311341010885250.tip-bot2@tip-bot2>
Date: Fri, 01 May 2020 18:22:09 -0000
From: "tip-bot2 for Chen Yu" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: kbuild test robot <lkp@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Chen Yu <yu.c.chen@...el.com>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: sched/core] sched: Make newidle_balance() static again

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     d91cecc156620ec75d94c55369509c807c3d07e6
Gitweb:        https://git.kernel.org/tip/d91cecc156620ec75d94c55369509c807c3d07e6
Author:        Chen Yu <yu.c.chen@...el.com>
AuthorDate:    Tue, 21 Apr 2020 18:50:34 +08:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 30 Apr 2020 20:14:40 +02:00

sched: Make newidle_balance() static again

After commit 6e2df0581f56 ("sched: Fix pick_next_task() vs 'change'
pattern race"), there is no need to expose newidle_balance(), as it
is only used within fair.c. Make this function static again.

No functional change.
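For readers unfamiliar with the pattern, here is a minimal user-space
sketch of what the fair.c change does: keep the function's linkage
internal with `static`, and satisfy earlier callers in the same file
with a forward declaration. All names below are illustrative, not the
kernel's.

/*
 * Minimal sketch: a static function plus a forward declaration, so
 * earlier code in the same file can call it without the symbol being
 * visible outside this translation unit. caller() and helper() are
 * hypothetical names.
 */
#include <stdio.h>

static int helper(int x);	/* forward declaration, internal linkage */

static int caller(int x)
{
	return helper(x) + 1;	/* callable before helper is defined */
}

static int helper(int x)
{
	return 2 * x;
}

int main(void)
{
	printf("%d\n", caller(20));	/* prints 41 */
	return 0;
}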
Reported-by: kbuild test robot <lkp@...el.com>
Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Chen Yu <yu.c.chen@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/83cd3030b031ca5d646cd5e225be10e7a0fdd8f5.1587464698.git.yu.c.chen@intel.com
---
 kernel/sched/fair.c  | 6 ++++--
 kernel/sched/sched.h | 4 ----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4b959c0..c0216ef 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3873,6 +3873,8 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+
 static inline unsigned long task_util(struct task_struct *p)
 {
 	return READ_ONCE(p->se.avg.util_avg);
@@ -4054,7 +4056,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
+static inline int newidle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -10414,7 +10416,7 @@ static inline void nohz_newidle_balance(struct rq *this_rq) { }
  * 0 - failed, no new tasks
  * > 0 - success, new (fair) tasks present
  */
-int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
+static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7198683..978c6fa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1503,14 +1503,10 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
-extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
-
 #else
 
 static inline void sched_ttwu_pending(void) { }
 
-static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
-
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
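The sched.h removals above are safe because fair.c now carries both
variants of newidle_balance() itself: the real SMP implementation and
the !SMP stub returning 0 (the second fair.c hunk). As a general,
hypothetical illustration of the config-stub idiom being dropped from
the header (FEATURE_X and do_balance() are made-up names, not the
scheduler's):

/*
 * Header idiom: a real declaration when the feature is built in, and
 * a do-nothing static inline stub otherwise, so callers need no
 * #ifdefs of their own.
 */
#ifdef FEATURE_X
extern int do_balance(void);		/* implemented in a .c file */
#else
static inline int do_balance(void)	/* stub: optimized away */
{
	return 0;
}
#endif

Once the only caller lives in the same file as the implementation,
this header pair becomes dead weight and the function can be made
file-local.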