Message-ID: <20191108170320.ckvvwqa2f45v2trv@e107158-lin>
Date: Fri, 8 Nov 2019 17:03:20 +0000
From: Qais Yousef <qais.yousef@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, vincent.guittot@...aro.org,
dietmar.eggemann@....com, juri.lelli@...hat.com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
linux-kernel@...r.kernel.org, qperret@...gle.com,
valentin.schneider@....com, ktkhai@...tuozzo.com
Subject: Re: [PATCH 1/7] sched: Fix pick_next_task() vs change pattern race
On 11/08/19 14:15, Peter Zijlstra wrote:
> Commit 67692435c411 ("sched: Rework pick_next_task() slow-path")
> inadvertently introduced a race because it changed a previously
> unexplored dependency between dropping the rq->lock and
> sched_class::put_prev_task().
>
> The comments about dropping rq->lock, in for example
> newidle_balance(), only mention the task being current and ->on_cpu
> being set. But when we look at the 'change' pattern (in for example
> sched_setnuma()):
>
> queued = task_on_rq_queued(p); /* p->on_rq == TASK_ON_RQ_QUEUED */
> running = task_current(rq, p); /* rq->curr == p */
>
> if (queued)
> dequeue_task(...);
> if (running)
> put_prev_task(...);
>
> /* change task properties */
>
> if (queued)
> enqueue_task(...);
> if (running)
> set_next_task(...);
>
> It becomes obvious that if we do this after put_prev_task() has
> already been called on @p, things go sideways. This is exactly what
> the commit in question allows to happen when it does:
>
> prev->sched_class->put_prev_task(rq, prev, rf);
> if (!rq->nr_running)
> newidle_balance(rq, rf);
>
> The newidle_balance() call will drop rq->lock after we've called
> put_prev_task() and that allows the above 'change' pattern to
> interleave and mess up the state.
>
> Furthermore, it turns out we lost the RT-pull when we put the last DL
> task.
>
> Fix both problems by extracting the balancing from put_prev_task() and
> doing a multi-class balance() pass before put_prev_task().
>
> Fixes: 67692435c411 ("sched: Rework pick_next_task() slow-path")
> Reported-by: Quentin Perret <qperret@...gle.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
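To make sure I read it right, the new ordering in the slow path is roughly the
below (paraphrased from the patch description, not the actual diff): a
per-class balance() pass runs before put_prev_task(), so any rq->lock dropping
happens while prev is still fully set up:

	const struct sched_class *class;

	/*
	 * Do the balancing, and hence any rq->lock dropping, *before*
	 * put_prev_task(), so the 'change' pattern can't interleave with
	 * a half-put prev.
	 */
	for (class = prev->sched_class; class; class = class->next) {
		if (class->balance(rq, prev, rf))
			break;
	}

	put_prev_task(rq, prev);
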
This feels more natural and less fragile than the previous approach, for sure.
I'm spinning the whole series in my reproducer at the minute.
Thinking ahead, can we do something to help us debug or spot this kind of race
more easily in the future?
I'm thinking maybe something like
	save_rq_state();
	rq_unlock();

	.
	.
	.

	rq_lock();
	assert_rq_state(); /* Bug here if something we don't expect has
			      changed when the lock wasn't held */
might help us verify that no one has changed the state of the rq while the
lock wasn't held.
At least it would help us catch such races closer to the point where things
actually go wrong.
We could also annotate the functions that modify the state of the rq, to help
debug the sequence of events.
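Concretely, I'm imagining something like the below minimal sketch. All the
names here (struct rq_state, rq_modify(), save_rq_state(), assert_rq_state(),
and the new rq->state_seq field) are made up just for illustration;
lockdep_assert_held() and SCHED_WARN_ON() are the existing helpers:

	/* Snapshot of the rq state we expect to stay stable. */
	struct rq_state {
		struct task_struct	*curr;
		unsigned int		nr_running;
		unsigned long		state_seq;
	};

	/*
	 * Annotation for functions that modify rq state; assumes a new
	 * rq->state_seq counter added to struct rq.
	 */
	#define rq_modify(rq)	do { (rq)->state_seq++; } while (0)

	static inline void save_rq_state(struct rq *rq, struct rq_state *s)
	{
		lockdep_assert_held(&rq->lock);
		s->curr		= rq->curr;
		s->nr_running	= rq->nr_running;
		s->state_seq	= rq->state_seq;
	}

	static inline void assert_rq_state(struct rq *rq, struct rq_state *s)
	{
		lockdep_assert_held(&rq->lock);
		/* Bug if anything changed while the lock wasn't held. */
		SCHED_WARN_ON(s->curr != rq->curr);
		SCHED_WARN_ON(s->nr_running != rq->nr_running);
		SCHED_WARN_ON(s->state_seq != rq->state_seq);
	}

The seq counter would catch a transient modification that restored the original
values before retaking the lock, which comparing fields alone would miss.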
Happy to carry out the work if there's anything sensible that we can actually
do.
Cheers
--
Qais Yousef