Message-ID: <tip-f07333bf6ee66d9b49286cec4371cf375e745b7a@git.kernel.org>
Date: Wed, 26 Jan 2011 12:11:28 GMT
From: tip-bot for Paul Turner <pjt@...gle.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...hat.com,
a.p.zijlstra@...llo.nl, pjt@...gle.com, tglx@...utronix.de,
mingo@...e.hu
Subject: [tip:sched/core] sched: Avoid expensive initial update_cfs_load()
Commit-ID: f07333bf6ee66d9b49286cec4371cf375e745b7a
Gitweb: http://git.kernel.org/tip/f07333bf6ee66d9b49286cec4371cf375e745b7a
Author: Paul Turner <pjt@...gle.com>
AuthorDate: Fri, 21 Jan 2011 20:45:03 -0800
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Wed, 26 Jan 2011 12:33:19 +0100
sched: Avoid expensive initial update_cfs_load()
Since cfs_rq->{load_stamp,load_last} are zero-initialized, the initial load update
will consider the delta to be 'since the beginning of time'.
This results in a lot of pointless divisions to fold this large period back
within sysctl_sched_shares_window.
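
To make the cost concrete, here is a small standalone sketch of the fold
loop at the end of update_cfs_load(), which repeatedly halves load_period
(and load_avg) until it fits within one shares window; the halving is per
the kernel of this era, while the driver program and its numbers are
illustrative only:

	#include <stdio.h>
	#include <stdint.h>

	typedef uint64_t u64;

	/* count the halvings needed to fold load_period under one window */
	static int fold(u64 load_period, u64 period)
	{
		int divisions = 0;

		while (load_period > period) {
			load_period /= 2; /* the kernel also halves load_avg */
			divisions++;
		}
		return divisions;
	}

	int main(void)
	{
		u64 period = 10000000ULL;            /* 10ms window, in ns */
		u64 boot_delta = 86400000000000ULL;  /* a day of uptime, ns */

		/* load_stamp == 0 makes the first delta span all of boot */
		printf("halvings for boot-sized delta: %d\n",
		       fold(boot_delta, period));
		/* the truncation path instead caps delta at period - 1 */
		printf("halvings after truncation: %d\n",
		       fold(period - 1, period));
		return 0;
	}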
Fix this by initializing load_stamp to 1 at cfs_rq initialization; this allows
for an initial load_stamp > load_last, which then lets standard idle truncation
proceed.
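
A minimal demonstration of why the 1 vs. 0 distinction matters, using the
idle-truncation predicate of update_cfs_load() (the now - load_last part is
visible in the second hunk below, and the load_stamp > load_last part is the
condition this changelog describes; the harness around it is illustrative):

	#include <stdio.h>
	#include <stdint.h>

	typedef uint64_t u64;

	/* the idle-truncation check, as in update_cfs_load() */
	static int should_truncate(u64 load_stamp, u64 load_last,
				   u64 now, u64 period)
	{
		return load_stamp > load_last &&
		       now - load_last > 4 * period;
	}

	int main(void)
	{
		u64 period = 10000000ULL;  /* 10ms window, in ns */
		u64 now = 50000000000ULL;  /* first update, 50s after boot */

		/* zero-init: 0 > 0 fails, the first update never truncates */
		printf("load_stamp=0: truncate=%d\n",
		       should_truncate(0, 0, now, period));
		/* with load_stamp = 1: 1 > 0 holds, idle truncation fires */
		printf("load_stamp=1: truncate=%d\n",
		       should_truncate(1, 0, now, period));
		return 0;
	}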
We avoid spinning (and slightly improve consistency) by fixing delta to
[period - 1] in this path, resulting in a slightly more predictable shares ramp.
(Previously the amount of idle time preserved by the overflow would range
between [period/2, period-1].)
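
For the consistency claim, a quick standalone sketch of the residue left in
load_period by the old path (zero it, add an arbitrary large delta, fold)
versus the fixed delta = period - 1; this is an illustration of the
arithmetic, not kernel code:

	#include <stdio.h>
	#include <stdint.h>

	typedef uint64_t u64;

	/* residue in load_period after zeroing, adding delta, and folding */
	static u64 residue(u64 delta, u64 period)
	{
		u64 load_period = delta; /* truncation zeroed it first */

		while (load_period > period)
			load_period /= 2;
		return load_period;
	}

	int main(void)
	{
		u64 period = 10000000ULL; /* 10ms window, in ns */

		/* old: residue depends on delta, anywhere in (period/2, period] */
		printf("delta=3*period:   %llu\n",
		       (unsigned long long)residue(3 * period, period));
		printf("delta=100*period: %llu\n",
		       (unsigned long long)residue(100 * period, period));
		/* new: delta pinned to period - 1, residue always period - 1 */
		printf("delta=period-1:   %llu\n",
		       (unsigned long long)residue(period - 1, period));
		return 0;
	}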
Signed-off-by: Paul Turner <pjt@...gle.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
LKML-Reference: <20110122044852.102126037@...gle.com>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
 kernel/sched.c      |    2 ++
 kernel/sched_fair.c |    1 +
 2 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index e0fa3ff..6820b5b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7796,6 +7796,8 @@ static void init_cfs_rq(struct cfs_rq *cfs_rq, struct rq *rq)
         INIT_LIST_HEAD(&cfs_rq->tasks);
 #ifdef CONFIG_FAIR_GROUP_SCHED
         cfs_rq->rq = rq;
+        /* allow initial update_cfs_load() to truncate */
+        cfs_rq->load_stamp = 1;
 #endif
         cfs_rq->min_vruntime = (u64)(-(1LL << 20));
 }
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 0c550c8..4cbc912 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -733,6 +733,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
             now - cfs_rq->load_last > 4 * period) {
                 cfs_rq->load_period = 0;
                 cfs_rq->load_avg = 0;
+                delta = period - 1;
         }

         cfs_rq->load_stamp = now;