Message-ID: <4F957723.2000000@gmail.com>
Date: Mon, 23 Apr 2012 17:37:07 +0200
From: Juri Lelli <juri.lelli@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: tglx@...utronix.de, mingo@...hat.com, rostedt@...dmis.org,
cfriesen@...tel.com, oleg@...hat.com, fweisbec@...il.com,
darren@...art.com, johan.eker@...csson.com, p.faure@...tech.ch,
linux-kernel@...r.kernel.org, claudio@...dence.eu.com,
michael@...rulasolutions.com, fchecconi@...il.com,
tommaso.cucinotta@...up.it, nicola.manica@...i.unitn.it,
luca.abeni@...tn.it, dhaval.giani@...il.com, hgu1972@...il.com,
paulmck@...ux.vnet.ibm.com, raistlin@...ux.it,
insop.song@...csson.com, liming.wang@...driver.com
Subject: Re: [PATCH 05/16] sched: SCHED_DEADLINE policy implementation.
On 04/23/2012 05:15 PM, Peter Zijlstra wrote:
> On Fri, 2012-04-06 at 09:14 +0200, Juri Lelli wrote:
>> +static
>> +int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
>> +{
>> + int dmiss = dl_time_before(dl_se->deadline, rq->clock);
>> + int rorun = dl_se->runtime <= 0;
>> +
>> + if (!rorun && !dmiss)
>> + return 0;
>> +
>> + /*
>> + * If we are beyond our current deadline and we are still
>> + * executing, then we have already used some of the runtime of
>> + * the next instance. Thus, if we do not account that, we are
>> + * stealing bandwidth from the system at each deadline miss!
>> + */
>> + if (dmiss) {
>> + dl_se->runtime = rorun ? dl_se->runtime : 0;
>> + dl_se->runtime -= rq->clock - dl_se->deadline;
>> + }
>
> So ideally this can't happen, but since we already leak time from the
> system by means of hardirq / kstop / context-switch-overhead /
> clock-jitter etc., we avoid the error accumulating?
>
Yep, seems fair :-).
>> +
>> + return 1;
>> +}
>
>
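To make the effect concrete, here is a minimal user-space sketch of the
accounting above. It is not kernel code: fake_dl_se, runtime_exceeded() and
the naive replenish() are illustrative stand-ins for the real scheduling
entity and replenishment logic, just to show how an overrun past the
deadline ends up being paid back out of the next instance's budget.

/*
 * Toy model: a deadline entity with a 10ns budget every 100ns period.
 * The clock is observed 30ns past the deadline while some budget is
 * still left (e.g. because of hardirq time / clock jitter, as above).
 */
#include <stdio.h>

typedef long long s64;

struct fake_dl_se {
	s64 runtime;    /* remaining runtime in this instance (ns) */
	s64 deadline;   /* absolute deadline (ns)                  */
	s64 dl_runtime; /* per-period budget (ns)                  */
	s64 dl_period;  /* period / relative deadline (ns)         */
};

/* Mirrors the check in the patch: runtime overrun and/or deadline miss. */
static int runtime_exceeded(struct fake_dl_se *se, s64 now)
{
	int dmiss = now > se->deadline;
	int rorun = se->runtime <= 0;

	if (!rorun && !dmiss)
		return 0;

	/* Charge the time executed past the deadline to the next instance. */
	if (dmiss) {
		se->runtime = rorun ? se->runtime : 0;
		se->runtime -= now - se->deadline;
	}
	return 1;
}

/* Naive replenishment: hand back one budget, push the deadline forward. */
static void replenish(struct fake_dl_se *se)
{
	se->runtime += se->dl_runtime;
	se->deadline += se->dl_period;
}

int main(void)
{
	struct fake_dl_se se = {
		.runtime = 2, .deadline = 100,
		.dl_runtime = 10, .dl_period = 100,
	};

	/* Clock is 130: 30ns past the deadline, budget not yet exhausted. */
	if (runtime_exceeded(&se, 130))
		replenish(&se);

	/*
	 * runtime is 0 - 30 + 10 = -20: the 30ns run past the deadline is
	 * paid back before the task runs again, instead of being stolen
	 * from the system at every deadline miss.
	 */
	printf("runtime after replenish: %lld ns\n", (long long)se.runtime);
	return 0;
}

Without the dmiss branch, the same scenario would end with a full fresh
budget of 10ns and the 30ns overrun would never be accounted anywhere.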
Thanks,
- Juri