Message-ID: <87zgou6iq1.mognet@arm.com>
Date: Tue, 21 Dec 2021 16:11:34 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
John Keeping <john@...anate.com>,
linux-rt-users@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [RT] BUG in sched/cpupri.c
On 20/12/21 18:35, Dietmar Eggemann wrote:
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ef8228d19382..798887f1eeff 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1895,9 +1895,17 @@ static int push_rt_task(struct rq *rq, bool pull)
> struct task_struct *push_task = NULL;
> int cpu;
>
> + if (WARN_ON_ONCE(!rt_task(rq->curr))) {
> + printk("next_task=[%s %d] rq->curr=[%s %d]\n",
> + next_task->comm, next_task->pid, rq->curr->comm, rq->curr->pid);
> + }
> +
> if (!pull || rq->push_busy)
> return 0;
>
> + if (!rt_task(rq->curr))
> + return 0;
> +
If current is a DL/stopper task, that's fine; but if it's CFS (which IIUC is
your case), that's buggered: we shouldn't be trying to pull RT tasks when we
have queued RT tasks and a less-than-RT current, we should be rescheduling
right now.
I'm thinking this can happen via rt_mutex_setprio() when we demote an RT-boosted
CFS task (or straight up sched_setscheduler()):
check_class_changed()->switched_from_rt() doesn't trigger a resched_curr(),
so I suspect we get to the push/pull callback before getting a
resched (I actually don't see where we'd get a resched in that case other
than at the next tick).
IOW, feels like we want the below. Unfortunately I can't reproduce the
issue locally (yet), so that's untested.
---
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fd7c4f972aaf..7d61ceec1a3b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2467,10 +2467,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
* this is the right place to try to pull some other one
* from an overloaded CPU, if any.
*/
- if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
+ if (!task_on_rq_queued(p))
return;
- deadline_queue_pull_task(rq);
+ if (!rq->dl.dl_nr_running)
+ deadline_queue_pull_task(rq);
+ else if (task_current(rq, p) && (p->sched_class < &dl_sched_class))
+ resched_curr(rq);
}
/*
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ef8228d19382..1ea2567612fb 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2322,10 +2322,13 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
* we may need to handle the pulling of RT tasks
* now.
*/
- if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
+ if (!task_on_rq_queued(p))
return;
- rt_queue_pull_task(rq);
+ if (!rq->rt.rt_nr_running)
+ rt_queue_pull_task(rq);
+ else if (task_current(rq, p) && (p->sched_class < &rt_sched_class))
+ resched_curr(rq);
}
void __init init_sched_rt_class(void)