Message-ID: <tip-1a99ae3f00d3c7c7885ee529ac9a874b19caa0cf@git.kernel.org>
Date: Fri, 3 Jun 2016 03:47:33 -0700
From: tip-bot for Xunlei Pang <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, rostedt@...dmis.org, mingo@...nel.org,
torvalds@...ux-foundation.org, hpa@...or.com,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
xlpang@...hat.com, juri.lelli@....com, efault@....de
Subject: [tip:sched/core] sched/fair: Fix the wrong throttled clock time for
cfs_rq_clock_task()
Commit-ID: 1a99ae3f00d3c7c7885ee529ac9a874b19caa0cf
Gitweb: http://git.kernel.org/tip/1a99ae3f00d3c7c7885ee529ac9a874b19caa0cf
Author: Xunlei Pang <xlpang@...hat.com>
AuthorDate: Tue, 10 May 2016 21:03:18 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 3 Jun 2016 09:18:56 +0200
sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
Two minor fixes for cfs_rq_clock_task():

 1) If cfs_rq is currently being throttled, we need to subtract the
    cfs throttled clock time accumulated so far.

 2) Update "throttled_clock_task_time" regardless of CONFIG_SMP;
    UP cases now need it as well.
Signed-off-by: Xunlei Pang <xlpang@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1462885398-14724-1-git-send-email-xlpang@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 218f8e8..1e87bb6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3688,7 +3688,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
 {
 	if (unlikely(cfs_rq->throttle_count))
-		return cfs_rq->throttled_clock_task;
+		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
 
 	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
 }
@@ -3826,13 +3826,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
 	cfs_rq->throttle_count--;
-#ifdef CONFIG_SMP
 	if (!cfs_rq->throttle_count) {
 		/* adjust cfs_rq_clock_task() */
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
 	}
-#endif
 
 	return 0;
 }