Message-Id: <1427065403-19429-1-git-send-email-wanpeng.li@linux.intel.com>
Date: Mon, 23 Mar 2015 07:03:23 +0800
From: Wanpeng Li <wanpeng.li@...ux.intel.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@....com>, linux-kernel@...r.kernel.org,
Wanpeng Li <wanpeng.li@...ux.intel.com>
Subject: [PATCH v12] sched/deadline: support dl task migration during cpu hotplug
I observe that a dl task can't be migrated to other cpus during cpu hotplug;
in addition, the task may or may not be running again if the cpu is added
back. The root cause I found is that a dl task is throttled and removed from
its dl rq after consuming all of its budget, so the stop task can't pick it
up from the dl rq and migrate it to another cpu during hotplug.
To reproduce:
schedtool -E -t 50000:100000 -e ./test
Here test is just a simple for loop (a minimal sketch is included below).
Then observe which cpu the test task is on and take that cpu offline:
echo 0 > /sys/devices/system/cpu/cpuN/online
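For reference, a minimal sketch of such a test program. The actual binary is
not part of this mail, so the file name and loop body below are only
illustrative:

/*
 * test.c - illustrative busy loop standing in for ./test above.
 *
 *   gcc -o test test.c
 *   schedtool -E -t 50000:100000 -e ./test
 */
int main(void)
{
	volatile unsigned long i = 0;

	/*
	 * Spin forever so the deadline task keeps consuming its runtime
	 * budget and is throttled every period.
	 */
	for (;;)
		i++;

	return 0;
}

The cpu the task is currently running on can be checked with, e.g.,
ps -o pid,psr,comm -C test.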
This patch adds dl task migration during cpu hotplug: when the dl timer
fires and the current rq is offline, we look for the most suitable later
deadline rq; if no suitable later deadline rq can be found, we fall back to
any eligible online cpu so that the deadline task comes back to us, and the
push/pull mechanism can then move it around properly.
Suggested-and-acked-by: Juri Lelli <juri.lelli@....com>
Signed-off-by: Wanpeng Li <wanpeng.li@...ux.intel.com>
---
v11 -> v12:
* s/WARN_ON/BUG_ON
v10 -> v11:
* fix code comments
* tsk_cpus_allowed(p) shouldn't be on separate lines
* introduce a helper function that encapsulates the dl task migration during cpu hotplug
v9 -> v10:
* fix the "WARNING: line over 80 characters"
* handle the case where admission control is disabled
v8 -> v9:
* align tsk_cpus_allowed(p) to cpu_active_mask
* add WARN_ON(1)
* don't resched_curr() if later_rq comes from cpumask_any_and()
v7 -> v8:
* remove the rd->span related modification since Pang's commit 16b269436b72
("sched/deadline: Modify cpudl::free_cpus to reflect rd->online") was merged
upstream, which, as Juri pointed out, already handles exclusive cpusets.
* rebase
v6 -> v7:
* rebase
v5 -> v6:
* add double_lock_balance in the fallback path
v4 -> v5:
* remove raw_spin_unlock(&rq->lock)
* cleanup codes, spotted by Peterz
* cleanup patch description
v3 -> v4:
* use tsk_cpus_allowed wrapper
* fix compile error
v2 -> v3:
* don't get_task_struct
* if we cannot preempt any rq, fall back to picking any online cpu
* use cpu_active_mask as original later_mask if cpu is offline
v1 -> v2:
* push the task to another cpu in dl_task_timer() if rq is offline.
kernel/sched/deadline.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 24c18dc..97a3c68 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -492,6 +492,54 @@ static int start_dl_timer(struct sched_dl_entity *dl_se, bool boosted)
return hrtimer_active(&dl_se->dl_timer);
}
+static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq);
+
+static void dl_task_migration(struct rq *rq, struct task_struct *p)
+{
+ struct rq *later_rq = NULL;
+ bool fallback = false;
+
+ later_rq = find_lock_later_rq(p, rq);
+
+ if (!later_rq) {
+ int cpu;
+
+ /*
+ * If we cannot preempt any rq, fall back to pick any
+ * online cpu.
+ */
+ fallback = true;
+ cpu = cpumask_any_and(cpu_active_mask, tsk_cpus_allowed(p));
+ if (cpu >= nr_cpu_ids) {
+ if (dl_bandwidth_enabled()) {
+ /*
+ * Fail to find any suitable cpu.
+ * The task will never come back!
+ */
+ BUG_ON(1);
+ return;
+ }
+ /*
+ * If admission control is disabled we
+ * try a little harder to let the task
+ * run.
+ */
+ cpu = cpumask_any(cpu_active_mask);
+ }
+ later_rq = cpu_rq(cpu);
+ double_lock_balance(rq, later_rq);
+ }
+
+ deactivate_task(rq, p, 0);
+ set_task_cpu(p, later_rq->cpu);
+ activate_task(later_rq, p, ENQUEUE_REPLENISH);
+
+ if (!fallback)
+ resched_curr(later_rq);
+
+ double_unlock_balance(rq, later_rq);
+}
+
/*
* This is the bandwidth enforcement timer callback. If here, we know
* a task is not on its dl_rq, since the fact that the timer was running
@@ -537,6 +585,15 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
update_rq_clock(rq);
/*
+ * So if we find that the rq the task was on is no longer
+ * available, we need to select a new rq.
+ */
+ if (unlikely(!rq->online)) {
+ dl_task_migration(rq, p);
+ goto unlock;
+ }
+
+ /*
* If the throttle happened during sched-out; like:
*
* schedule()
--
1.9.1