Message-Id: <20190815145107.5318-5-valentin.schneider@arm.com>
Date: Thu, 15 Aug 2019 15:51:07 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...nel.org, peterz@...radead.org, vincent.guittot@...aro.org,
tglx@...utronix.de, qais.yousef@....com
Subject: [PATCH v2 4/4] sched/fair: Prevent active LB from preempting higher sched classes
The CFS load balancer can have the cpu_stopper run a function to try
to steal a remote rq's running task. However, while only CFS tasks
will ever be migrated by that function, the fact that it is executed
by the cpu_stopper means we can end up preempting tasks of a higher
sched class.

This can occur whenever an rq's running task is >CFS but the rq still
has runnable CFS tasks.

Check the sched class of the remote rq's running task after we've
grabbed its lock. If it's CFS, carry on as before; otherwise, run
detach_one_task() locally, since we don't need the cpu_stopper (that
!CFS task is already doing the exact same thing, i.e. keeping CFS
tasks off the CPU).

Signed-off-by: Valentin Schneider <valentin.schneider@....com>
---
kernel/sched/fair.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8f5f6cad5008..bf4f7f43252f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8759,6 +8759,7 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
 	enum alb_status status = cancelled;
 	struct sched_domain *sd = env->sd;
 	struct rq *busiest = env->src_rq;
+	struct task_struct *p = NULL;
 	unsigned long flags;
 
 	schedstat_inc(sd->lb_failed[env->idle]);
@@ -8780,6 +8781,16 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
 	if (busiest->cfs.h_nr_running < 1)
 		goto unlock;
 
+	/*
+	 * If busiest->curr isn't CFS, then there's no point in running the
+	 * cpu_stopper. We can try to nab one CFS task though, since they're
+	 * all ready to be plucked.
+	 */
+	if (busiest->curr->sched_class != &fair_sched_class) {
+		p = detach_one_task(env);
+		goto unlock;
+	}
+
 	/*
 	 * Don't kick the active_load_balance_cpu_stop, if the curr task on
 	 * busiest CPU can't be moved to dst_cpu:
@@ -8803,7 +8814,9 @@ static inline enum alb_status active_load_balance(struct lb_env *env)
 unlock:
 	raw_spin_unlock_irqrestore(&busiest->lock, flags);
 
-	if (status == started)
+	if (p)
+		attach_one_task(env->dst_rq, p);
+	else if (status == started)
 		stop_one_cpu_nowait(cpu_of(busiest),
 				    active_load_balance_cpu_stop, busiest,
 				    &busiest->active_balance_work);
--
2.22.0