Message-ID: <4F95769E.70303@gmail.com>
Date: Mon, 23 Apr 2012 17:34:54 +0200
From: Juri Lelli <juri.lelli@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: tglx@...utronix.de, mingo@...hat.com, rostedt@...dmis.org,
cfriesen@...tel.com, oleg@...hat.com, fweisbec@...il.com,
darren@...art.com, johan.eker@...csson.com, p.faure@...tech.ch,
linux-kernel@...r.kernel.org, claudio@...dence.eu.com,
michael@...rulasolutions.com, fchecconi@...il.com,
tommaso.cucinotta@...up.it, nicola.manica@...i.unitn.it,
luca.abeni@...tn.it, dhaval.giani@...il.com, hgu1972@...il.com,
paulmck@...ux.vnet.ibm.com, raistlin@...ux.it,
insop.song@...csson.com, liming.wang@...driver.com
Subject: Re: [PATCH 05/16] sched: SCHED_DEADLINE policy implementation.
On 04/23/2012 04:25 PM, Peter Zijlstra wrote:
> On Fri, 2012-04-06 at 09:14 +0200, Juri Lelli wrote:
>> +/*
>> + * This is the bandwidth enforcement timer callback. If we get here,
>> + * we know the task is not on its dl_rq, since the fact that the timer
>> + * was running means the task is throttled and needs a runtime
>> + * replenishment.
>> + *
>> + * However, what we actually do depends on whether the task is still
>> + * active (i.e., it is on its rq) or has been removed from there by a
>> + * call to dequeue_task_dl(). In the former case we must issue the
>> + * runtime replenishment and add the task back to the dl_rq; in the
>> + * latter, we just do nothing but clear dl_throttled, so that the
>> + * runtime and deadline update (and the queueing back to the dl_rq)
>> + * will be done by the next call to enqueue_task_dl().
>
> OK, so that comment isn't entirely clear to me: how can that timer still
> be active when the task isn't? You start the timer when you throttle it,
> and at that point it cannot in fact dequeue itself anymore.
>
> The only possibility I see is the one mentioned with the dl_task() check
> below, that someone else called sched_setscheduler() on it.
>
Ok, I was also stuck at this point when I first reviewed v3.
Then I convinced myself that, even if it is probably always true,
the p->on_rq check would guard against weird situations like, for
example: by the time I block on a mutex, go to sleep or whatever,
I am throttled, then the dl_timer fires while I'm still !on_rq.
But I never actually saw this happen...
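To make the ordering I have in mind explicit, here is a rough userspace
toy model (just an illustration, not the actual scheduler code; the
toy_* names and the printfs are made up):

#include <stdio.h>

struct toy_task {
	int on_rq;		/* 1 if the task is enqueued on its runqueue */
	int dl_throttled;	/* 1 if runtime is exhausted and the timer is armed */
};

/* Runtime exhausted: throttle the task and (conceptually) arm the dl_timer. */
static void toy_throttle(struct toy_task *p)
{
	p->dl_throttled = 1;
}

/* The task blocks on a mutex / goes to sleep while still throttled. */
static void toy_dequeue_task_dl(struct toy_task *p)
{
	p->on_rq = 0;
}

/* dl_task_timer() firing: only the on_rq case enqueues back immediately. */
static void toy_dl_task_timer(struct toy_task *p)
{
	p->dl_throttled = 0;
	if (p->on_rq)
		printf("timer: replenish and enqueue back on the dl_rq\n");
	else
		printf("timer: !on_rq, only clear dl_throttled; "
		       "replenishment deferred to the next enqueue_task_dl()\n");
}

int main(void)
{
	struct toy_task p = { .on_rq = 1, .dl_throttled = 0 };

	toy_throttle(&p);		/* timer armed while the task is on_rq */
	toy_dequeue_task_dl(&p);	/* then it blocks/sleeps */
	toy_dl_task_timer(&p);		/* timer fires with !on_rq */

	return 0;
}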
>> + */
>> +static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
>> +{
>> +	unsigned long flags;
>> +	struct sched_dl_entity *dl_se = container_of(timer,
>> +						     struct sched_dl_entity,
>> +						     dl_timer);
>> +	struct task_struct *p = dl_task_of(dl_se);
>> +	struct rq *rq = task_rq_lock(p, &flags);
>> +
>> +	/*
>> +	 * We need to take care of a possible race here. In fact, the
>> +	 * task might have changed its scheduling policy to something
>> +	 * different from SCHED_DEADLINE (through sched_setscheduler()).
>> +	 */
>> +	if (!dl_task(p))
>> +		goto unlock;
>> +
>> +	dl_se->dl_throttled = 0;
>> +	if (p->on_rq) {
>> +		enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
>> +		if (task_has_dl_policy(rq->curr))
>> +			check_preempt_curr_dl(rq, p, 0);
>> +		else
>> +			resched_task(rq->curr);
>> +	}
>
> So I can't see how that cannot be true.
>
>> +unlock:
>> +	task_rq_unlock(rq, p, &flags);
>> +
>> +	return HRTIMER_NORESTART;
>> +}
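As for the !dl_task(p) check, the race is simply a concurrent
sched_setscheduler() sneaking in between the timer firing and the
callback taking the rq lock, so the policy has to be re-checked under
the lock. Again a made-up userspace toy (toy_* names are not real
kernel symbols), just to show the ordering:

#include <stdio.h>

#define TOY_SCHED_NORMAL	0
#define TOY_SCHED_DEADLINE	6

struct toy_task {
	int policy;
};

static int toy_dl_task(const struct toy_task *p)
{
	return p->policy == TOY_SCHED_DEADLINE;
}

/* Another CPU changes the policy while the timer callback is in flight. */
static void toy_sched_setscheduler(struct toy_task *p, int policy)
{
	p->policy = policy;
}

/* dl_task_timer(): re-check the policy under the rq lock and bail out. */
static void toy_dl_task_timer(struct toy_task *p)
{
	if (!toy_dl_task(p)) {
		printf("timer: policy changed under us, nothing to do\n");
		return;
	}
	printf("timer: still SCHED_DEADLINE, do the replenishment\n");
}

int main(void)
{
	struct toy_task p = { .policy = TOY_SCHED_DEADLINE };

	/* timer has fired, but before the callback takes the rq lock... */
	toy_sched_setscheduler(&p, TOY_SCHED_NORMAL);
	toy_dl_task_timer(&p);

	return 0;
}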