Message-ID: <20141105162531.GV3337@twins.programming.kicks-ass.net>
Date: Wed, 5 Nov 2014 17:25:31 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Kirill Tkhai <ktkhai@...allels.com>
Cc: Wanpeng Li <wanpeng.li@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@....com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2] sched/deadline: support dl task migration during
cpu hotplug
On Wed, Nov 05, 2014 at 04:14:00PM +0300, Kirill Tkhai wrote:
> > @@ -538,6 +539,39 @@ again:
> > update_rq_clock(rq);
> > dl_se->dl_throttled = 0;
> > dl_se->dl_yielded = 0;
> > +
> > + /*
> > + * So if we find that the rq the task was on is no longer
> > + * available, we need to select a new rq.
> > + */
> > + if (!rq->online) {
> > + struct rq *later_rq = NULL;
> > +
> > + /* We will release rq lock */
> > + get_task_struct(p);
No need for this: due to task_dead_dl() -> hrtimer_cancel(), this task
cannot go away while the timer callback is running.
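
For reference, the synchronization in question looks roughly like this
(a trimmed, untested sketch of the ~3.18-era task_dead_dl() in
kernel/sched/deadline.c; hrtimer_cancel() waits for a running callback
to complete):

	static void task_dead_dl(struct task_struct *p)
	{
		struct hrtimer *timer = &p->dl.dl_timer;

		/*
		 * hrtimer_cancel() does not return while dl_task_timer()
		 * is still executing, so once the task is being torn down
		 * no timer callback can still be referencing it; hence no
		 * get_task_struct()/put_task_struct() in the callback.
		 */
		hrtimer_cancel(timer);
	}
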
> > + raw_spin_unlock(&rq->lock);
> > +
> > + later_rq = find_lock_later_rq(p, rq);
> > +
> > + if (!later_rq) {
> > + put_task_struct(p);
> > + goto out;
> > + }
This is wrong, I think: we _must_ migrate the task. If we let it reside
on this offline rq it will never come back to us.
find_lock_later_rq() will fail for tasks that aren't currently eligible
to run. You could either try and change/parameterize it to return the
latest rq in that case, or just punt and pick any online cpu.
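
Completely untested, but the punt variant could look something like the
below (cpumask_any_and(), cpu_active_mask and tsk_cpus_allowed() exist
in this era; the empty-intersection case and the locking details are
glossed over):

	later_rq = find_lock_later_rq(p, rq);
	if (!later_rq) {
		int cpu;

		/*
		 * No eligible later rq was found; pick any active cpu
		 * the task is allowed on so it does not rot on the
		 * offline rq. (Handling of an empty intersection is
		 * omitted here.)
		 */
		cpu = cpumask_any_and(cpu_active_mask, tsk_cpus_allowed(p));
		later_rq = cpu_rq(cpu);
		double_lock_balance(rq, later_rq);
	}
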
> isn't this too complicated?
>
> Can't we simply queue throttled tasks in rq_offline_dl() (without clearing
> the dl_throttled() status)? migrate_tasks() will then do the migration
> correctly.
We can't find these tasks; we'd have to add extra lists etc. And it
seems consistent with the normal ttwu path, which migrates tasks when
they wake up.
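
For comparison, the wakeup-side fallback is roughly this (paraphrasing
select_task_rq() in kernel/sched/core.c of that era):

	cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);

	/* If the chosen cpu is not allowed or not online, fall back. */
	if (unlikely(!cpumask_test_cpu(cpu, tsk_cpus_allowed(p)) ||
		     !cpu_online(cpu)))
		cpu = select_fallback_rq(task_cpu(p), p);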