Message-ID: <tip-b8922125e4790fa237a8a4204562ecf457ef54bb@git.kernel.org>
Date: Wed, 10 Aug 2016 11:00:30 -0700
From: tip-bot for Xunlei Pang <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: torvalds@...ux-foundation.org, tglx@...utronix.de,
khlebnikov@...dex-team.ru, linux-kernel@...r.kernel.org,
hpa@...or.com, mingo@...nel.org, xlpang@...hat.com,
peterz@...radead.org
Subject: [tip:sched/core] sched/fair: Fix typo in sync_throttle()
Commit-ID: b8922125e4790fa237a8a4204562ecf457ef54bb
Gitweb: http://git.kernel.org/tip/b8922125e4790fa237a8a4204562ecf457ef54bb
Author: Xunlei Pang <xlpang@...hat.com>
AuthorDate: Sat, 9 Jul 2016 15:54:22 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 10 Aug 2016 13:32:55 +0200

sched/fair: Fix typo in sync_throttle()

We should update cfs_rq->throttled_clock_task, not
pcfs_rq->throttle_clock_task.

The effect of this bug was probably occasional erratic
group scheduling, particularly in cgroups-intense workloads.

Signed-off-by: Xunlei Pang <xlpang@...hat.com>
[ Added changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Fixes: 55e16d30bd99 ("sched/fair: Rework throttle_count sync")
Link: http://lkml.kernel.org/r/1468050862-18864-1-git-send-email-xlpang@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4088eed..039de34 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4269,7 +4269,7 @@ static void sync_throttle(struct task_group *tg, int cpu)
 	pcfs_rq = tg->parent->cfs_rq[cpu];
 
 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
-	pcfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
+	cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
 }
 
 /* conditionally throttle active cfs_rq's from put_prev_entity() */