Message-ID: <lsq.1465842997.880350998@decadent.org.uk>
Date: Mon, 13 Jun 2016 19:36:37 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Mike Galbraith" <umgwanakikbuti@...il.com>,
"Byungchul Park" <byungchul.park@....com>,
"Thomas Gleixner" <tglx@...utronix.de>, juri.lelli@...il.com,
ktkhai@...allels.com, rostedt@...dmis.org, oleg@...hat.com,
wanpeng.li@...ux.intel.com,
"Peter Zijlstra" <peterz@...radead.org>, pang.xunlei@...aro.org
Subject: [PATCH 3.16 110/114] sched: Allow balance callbacks for
check_class_changed()
3.16.36-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@...radead.org>

commit 4c9a4bc89a9cca8128bce67d6bc8870d6b7ee0b2 upstream.
In order to remove dropping rq->lock from the
switched_{to,from}()/prio_changed() sched_class methods, run the
balance callbacks after it.

We need to remove dropping rq->lock because it's buggy; suppose using
sched_setattr()/sched_setscheduler() to change a running task from
FIFO to OTHER.

By the time we get to switched_from_rt() the task is already enqueued
on the cfs runqueues. If switched_from_rt() does pull_rt_task() and
drops rq->lock, load-balancing can come in and move our task @p to
another rq.

The subsequent switched_to_fair() still assumes @p is on @rq and bad
things will happen.

By using balance callbacks we delay the load-balancing operations
{rt,dl}x{push,pull} until we've done all the important work and the
task is fully set up.

Furthermore, the balance callbacks do not know about @p, so they
cannot get confused like this.
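
For context when reviewing the backport: balance_callback(), called in the
hunks below, comes from earlier patches in the same upstream series and
drains a per-rq list of deferred push/pull callbacks once it is safe to do
so. The drain loop looks roughly like the sketch below; this is a simplified
rendering of the upstream helpers, not code contained in this patch, and the
3.16 backport may differ in detail.

	/* Sketch only: simplified from the upstream balance_callback machinery. */
	static void __balance_callback(struct rq *rq)
	{
		struct callback_head *head, *next;
		void (*func)(struct rq *rq);
		unsigned long flags;

		raw_spin_lock_irqsave(&rq->lock, flags);
		head = rq->balance_callback;
		rq->balance_callback = NULL;
		while (head) {
			func = (void (*)(struct rq *))head->func;
			next = head->next;
			head->next = NULL;
			head = next;

			/* Each callback runs with rq->lock held and may drop/retake it. */
			func(rq);
		}
		raw_spin_unlock_irqrestore(&rq->lock, flags);
	}

	static inline void balance_callback(struct rq *rq)
	{
		if (unlikely(rq->balance_callback))
			__balance_callback(rq);
	}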
Reported-by: Mike Galbraith <umgwanakikbuti@...il.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: ktkhai@...allels.com
Cc: rostedt@...dmis.org
Cc: juri.lelli@...il.com
Cc: pang.xunlei@...aro.org
Cc: oleg@...hat.com
Cc: wanpeng.li@...ux.intel.com
Link: http://lkml.kernel.org/r/20150611124742.615343911@infradead.org
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
[Conflicts: kernel/sched/core.c]
Signed-off-by: Byungchul Park <byungchul.park@....com>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
kernel/sched/core.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -999,6 +999,13 @@ inline int task_curr(const struct task_s
return cpu_curr(task_cpu(p)) == p;
}
+/*
+ * switched_from, switched_to and prio_changed must _NOT_ drop rq->lock,
+ * use the balance_callback list if you want balancing.
+ *
+ * this means any call to check_class_changed() must be followed by a call to
+ * balance_callback().
+ */
static inline void check_class_changed(struct rq *rq, struct task_struct *p,
const struct sched_class *prev_class,
int oldprio)
@@ -1500,8 +1507,12 @@ ttwu_do_wakeup(struct rq *rq, struct tas
p->state = TASK_RUNNING;
#ifdef CONFIG_SMP
- if (p->sched_class->task_woken)
+ if (p->sched_class->task_woken) {
+ /*
+ * XXX can drop rq->lock; most likely ok.
+ */
p->sched_class->task_woken(rq, p);
+ }
if (rq->idle_stamp) {
u64 delta = rq_clock(rq) - rq->idle_stamp;
@@ -3052,7 +3063,11 @@ void rt_mutex_setprio(struct task_struct
check_class_changed(rq, p, prev_class, oldprio);
out_unlock:
+ preempt_disable(); /* avoid rq from going away on us */
__task_rq_unlock(rq);
+
+ balance_callback(rq);
+ preempt_enable();
}
#endif
@@ -3575,10 +3590,17 @@ change:
}
check_class_changed(rq, p, prev_class, oldprio);
+ preempt_disable(); /* avoid rq from going away on us */
task_rq_unlock(rq, p, &flags);
rt_mutex_adjust_pi(p);
+ /*
+ * Run balance callbacks after we've adjusted the PI chain.
+ */
+ balance_callback(rq);
+ preempt_enable();
+
return 0;
}
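
As the comment added in the first hunk notes, every caller of
check_class_changed() now pairs it with balance_callback() after releasing
rq->lock, with preemption disabled so the rq cannot go away underneath us.
For completeness, the way a sched_class method queues such a callback
instead of pushing/pulling directly under rq->lock is roughly the sketch
below; again this is the upstream helper from the same series, shown only
for reference and not part of this backport.

	/* Sketch of the upstream helper used by the rt/deadline classes. */
	static inline void
	queue_balance_callback(struct rq *rq,
			       struct callback_head *head,
			       void (*func)(struct rq *rq))
	{
		lockdep_assert_held(&rq->lock);

		/* Already queued; nothing to do. */
		if (unlikely(head->next))
			return;

		head->func = (void (*)(struct callback_head *))func;
		head->next = rq->balance_callback;
		rq->balance_callback = head;
	}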