Message-ID: <tip-540247fb5ddf6d2364f90387fa1f8f428d15e683@git.kernel.org>
Date: Mon, 3 Aug 2015 10:11:22 -0700
From: tip-bot for Yuyang Du <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: yuyang.du@...el.com, mingo@...nel.org,
linux-kernel@...r.kernel.org, hpa@...or.com, peterz@...radead.org,
torvalds@...ux-foundation.org, tglx@...utronix.de, efault@....de
Subject: [tip:sched/core] sched/fair: Init cfs_rq's sched_entity load average
Commit-ID: 540247fb5ddf6d2364f90387fa1f8f428d15e683
Gitweb: http://git.kernel.org/tip/540247fb5ddf6d2364f90387fa1f8f428d15e683
Author: Yuyang Du <yuyang.du@...el.com>
AuthorDate: Wed, 15 Jul 2015 08:04:39 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 3 Aug 2015 12:24:29 +0200
sched/fair: Init cfs_rq's sched_entity load average
The runnable load and utilization averages of a cfs_rq's sched_entity
were not initialized. As is already done for a task, give a new cfs_rq's
sched_entity starting values that weight its load heavily during its
infant time.
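[ Editor's note: the standalone C sketch below is not kernel code; it only
  follows the documented PELT convention that a full 1024us period
  contributes at most 1024 and that contributions decay by a factor y per
  period with y^32 = 1/2. It illustrates roughly where LOAD_AVG_MAX comes
  from and why seeding load_sum with load_avg * LOAD_AVG_MAX makes a new
  entity start from a maximal history instead of ramping up from zero. ]

/*
 * Standalone sketch, not kernel code: approximate LOAD_AVG_MAX from the
 * PELT convention that each full 1024us period contributes at most 1024
 * and decays by y per period, with y^32 = 1/2. The kernel's exact 47742
 * also reflects its fixed-point decay tables, so the value printed here
 * is only close, not identical.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* per-period decay factor */
	double max_sum = 0.0;
	int n;

	for (n = 0; n <= 345; n++)		/* 345 == LOAD_AVG_MAX_N */
		max_sum += 1024.0 * pow(y, n);

	printf("approx LOAD_AVG_MAX = %.0f (kernel constant: 47742)\n", max_sum);

	/*
	 * With this patch a new sched_entity is seeded as if it had been
	 * runnable for that entire maximal history:
	 *   sa->load_avg = scale_load_down(se->load.weight);
	 *   sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
	 * so its average decays down from "full" rather than climbing up
	 * from zero while the entity is new.
	 */
	return 0;
}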
Signed-off-by: Yuyang Du <yuyang.du@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: arjan@...ux.intel.com
Cc: bsegall@...gle.com
Cc: dietmar.eggemann@....com
Cc: fengguang.wu@...el.com
Cc: len.brown@...el.com
Cc: morten.rasmussen@....com
Cc: pjt@...gle.com
Cc: rafael.j.wysocki@...el.com
Cc: umgwanakikbuti@...il.com
Cc: vincent.guittot@...aro.org
Link: http://lkml.kernel.org/r/1436918682-4971-5-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/core.c | 2 +-
kernel/sched/fair.c | 11 ++++++-----
kernel/sched/sched.h | 2 +-
3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3981526..5ca9ae0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2304,7 +2304,7 @@ void wake_up_new_task(struct task_struct *p)
#endif
/* Initialize new task's runnable average */
- init_task_runnable_average(p);
+ init_entity_runnable_average(&p->se);
rq = __task_rq_lock(p);
activate_task(rq, p, 0);
p->on_rq = TASK_ON_RQ_QUEUED;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4b80c6..f636db0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -667,10 +667,10 @@ static unsigned long task_h_load(struct task_struct *p);
#define LOAD_AVG_MAX 47742 /* maximum possible load avg */
#define LOAD_AVG_MAX_N 345 /* number of full periods to produce LOAD_MAX_AVG */
-/* Give new task start runnable values to heavy its load in infant time */
-void init_task_runnable_average(struct task_struct *p)
+/* Give new sched_entity start runnable values to heavy its load in infant time */
+void init_entity_runnable_average(struct sched_entity *se)
{
- struct sched_avg *sa = &p->se.avg;
+ struct sched_avg *sa = &se->avg;
sa->last_update_time = 0;
/*
@@ -679,14 +679,14 @@ void init_task_runnable_average(struct task_struct *p)
* will definitely be updated (after enqueue).
*/
sa->period_contrib = 1023;
- sa->load_avg = scale_load_down(p->se.load.weight);
+ sa->load_avg = scale_load_down(se->load.weight);
sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
sa->util_sum = LOAD_AVG_MAX;
/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
}
#else
-void init_task_runnable_average(struct task_struct *p)
+void init_entity_runnable_average(struct sched_entity *se)
{
}
#endif
@@ -8029,6 +8029,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
init_cfs_rq(cfs_rq);
init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
+ init_entity_runnable_average(se);
}
return 1;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index dcde941..4d139e0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1307,7 +1307,7 @@ extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
unsigned long to_ratio(u64 period, u64 runtime);
-extern void init_task_runnable_average(struct task_struct *p);
+extern void init_entity_runnable_average(struct sched_entity *se);
static inline void add_nr_running(struct rq *rq, unsigned count)
{