Message-ID: <tip-094f469172e00d6ab0a3130b0e01c83b3cf3a98d@git.kernel.org>
Date: Fri, 24 Jun 2016 01:59:34 -0700
From: tip-bot for Konstantin Khlebnikov <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: mingo@...nel.org, tglx@...utronix.de, hpa@...or.com,
khlebnikov@...dex-team.ru, peterz@...radead.org,
torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: [tip:sched/urgent] sched/fair: Initialize throttle_count for new
task-groups lazily
Commit-ID: 094f469172e00d6ab0a3130b0e01c83b3cf3a98d
Gitweb: http://git.kernel.org/tip/094f469172e00d6ab0a3130b0e01c83b3cf3a98d
Author: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
AuthorDate: Thu, 16 Jun 2016 15:57:01 +0300
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 24 Jun 2016 08:26:44 +0200
sched/fair: Initialize throttle_count for new task-groups lazily
A cgroup created inside a throttled group must inherit the current
throttle_count. A broken throttle_count allows throttled entries to be
nominated as the next buddy, which later leads to a NULL pointer
dereference in pick_next_task_fair().

This patch initializes cfs_rq->throttle_count at the first enqueue:
laziness allows us to skip locking all runqueues at group creation. The
lazy approach also makes it possible to skip a full sub-tree scan when
throttling the hierarchy (not done in this patch).
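
For illustration, here is a minimal userspace sketch of the lazy scheme.
This is a model only, not kernel code: struct node, sync_throttle_count()
and main() are hypothetical stand-ins for cfs_rq/task_group and
check_enqueue_throttle(), reduced to the walk up to the closest
up-to-date ancestor:

	#include <stdio.h>

	/*
	 * Hypothetical model of the lazy throttle_count synchronization:
	 * each node starts with throttle_count == 0 and uptodate == 0,
	 * and is synced on its first "enqueue" instead of at creation.
	 */
	struct node {
		struct node *parent;
		int throttle_count;	/* number of throttled ancestors */
		int uptodate;		/* set once the count is synced */
	};

	static void sync_throttle_count(struct node *n)
	{
		struct node *p;

		if (n->uptodate)
			return;
		n->uptodate = 1;

		/* Closest up-to-date ancestor, because leaves go first. */
		for (p = n->parent; p; p = p->parent) {
			if (p->uptodate) {
				n->throttle_count = p->throttle_count;
				break;
			}
		}
		/* No synced ancestor: the default of 0 is already right. */
	}

	int main(void)
	{
		struct node root   = { NULL,    1, 1 };	/* throttled, synced */
		struct node parent = { &root,   0, 0 };	/* created while throttled */
		struct node child  = { &parent, 0, 0 };

		sync_throttle_count(&child);	/* leaf is enqueued first ... */
		sync_throttle_count(&parent);	/* ... its parent afterwards */

		/* Both inherit root's count of 1: prints "child=1 parent=1". */
		printf("child=%d parent=%d\n",
		       child.throttle_count, parent.throttle_count);
		return 0;
	}

In the real patch the same walk runs over tg->cfs_rq[cpu_of(rq)] inside
check_enqueue_throttle(), and a found ancestor also refreshes
throttled_clock_task (see the fair.c hunk below).
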
Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: bsegall@...gle.com
Link: http://lkml.kernel.org/r/146608182119.21870.8439834428248129633.stgit@buzz
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 20 ++++++++++++++++++++
kernel/sched/sched.h | 2 +-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ae68f0..8c5d8c0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4202,6 +4202,26 @@ static void check_enqueue_throttle(struct cfs_rq *cfs_rq)
 	if (!cfs_bandwidth_used())
 		return;
 
+	/* Synchronize hierarchical throttle counter: */
+	if (unlikely(!cfs_rq->throttle_uptodate)) {
+		struct rq *rq = rq_of(cfs_rq);
+		struct cfs_rq *pcfs_rq;
+		struct task_group *tg;
+
+		cfs_rq->throttle_uptodate = 1;
+
+		/* Get closest up-to-date node, because leaves go first: */
+		for (tg = cfs_rq->tg->parent; tg; tg = tg->parent) {
+			pcfs_rq = tg->cfs_rq[cpu_of(rq)];
+			if (pcfs_rq->throttle_uptodate)
+				break;
+		}
+		if (tg) {
+			cfs_rq->throttle_count = pcfs_rq->throttle_count;
+			cfs_rq->throttled_clock_task = rq_clock_task(rq);
+		}
+	}
+
 	/* an active group must be handled by the update_curr()->put() path */
 	if (!cfs_rq->runtime_enabled || cfs_rq->curr)
 		return;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 72f1f30..7cbeb92 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -437,7 +437,7 @@ struct cfs_rq {
 	u64 throttled_clock, throttled_clock_task;
 	u64 throttled_clock_task_time;
-	int throttled, throttle_count;
+	int throttled, throttle_count, throttle_uptodate;
 	struct list_head throttled_list;
 #endif /* CONFIG_CFS_BANDWIDTH */
 #endif /* CONFIG_FAIR_GROUP_SCHED */