Message-Id: <20200214163949.27850-4-qais.yousef@arm.com>
Date: Fri, 14 Feb 2020 16:39:49 +0000
From: Qais Yousef <qais.yousef@....com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org, Qais Yousef <qais.yousef@....com>
Subject: [PATCH 3/3] sched/rt: fix pushing unfit tasks to a better CPU
If a task was running on an unfit CPU, we could skip migrating it when the
highest priority level of the new fitting CPU is the *same* as that of the
unfit one.
Add an extra check to select_task_rq_rt() to allow the push in case:
* old_cpu.highest_priority == new_cpu.highest_priority
* task_fits(p, new_cpu)
Signed-off-by: Qais Yousef <qais.yousef@....com>
---
I was occasionally seeing delays in migrating a task to a big CPU even though
it was free, and I believe this fixes it.
TBH, I fail to see why the check

	p->prio < cpu_rq(target)->rt.highest_prio.curr

is necessary, as find_lowest_rq() surely implies this condition by definition?
Unless we're guarding against a race here where the rt_rq priority has changed
between the time we selected the lowest_rq and the decision to migrate, in
which case this makes sense.
kernel/sched/rt.c | 34 +++++++++++++++++++++++++---------
1 file changed, 25 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0c8bac134d3a..5ea235f2cfe8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1430,7 +1430,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
{
struct task_struct *curr;
struct rq *rq;
- bool test;
+ bool test, fit;
/* For anything but wake ups, just return the task_cpu */
if (sd_flag != SD_BALANCE_WAKE && sd_flag != SD_BALANCE_FORK)
@@ -1471,16 +1471,32 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
unlikely(rt_task(curr)) &&
(curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
- if (test || !rt_task_fits_capacity(p, cpu)) {
+ fit = rt_task_fits_capacity(p, cpu);
+
+ if (test || !fit) {
int target = find_lowest_rq(p);
- /*
- * Don't bother moving it if the destination CPU is
- * not running a lower priority task.
- */
- if (target != -1 &&
- p->prio < cpu_rq(target)->rt.highest_prio.curr)
- cpu = target;
+ if (target != -1) {
+ /*
+ * Don't bother moving it if the destination CPU is
+ * not running a lower priority task.
+ */
+ if (p->prio < cpu_rq(target)->rt.highest_prio.curr) {
+
+ cpu = target;
+
+ } else if (p->prio == cpu_rq(target)->rt.highest_prio.curr) {
+
+ /*
+ * If the priority is the same and the new CPU
+ * is a better fit, then move, otherwise don't
+ * bother here either.
+ */
+ fit = rt_task_fits_capacity(p, target);
+ if (fit)
+ cpu = target;
+ }
+ }
}
rcu_read_unlock();
--
2.17.1