Message-ID: <20070514115049.GA28721@elte.hu>
Date: Mon, 14 May 2007 13:50:49 +0200
From: Ingo Molnar <mingo@...e.hu>
To: William Lee Irwin III <wli@...omorphy.com>
Cc: Srivatsa Vaddagiri <vatsa@...ibm.com>, efault@....de,
tingy@...umass.edu, linux-kernel@...r.kernel.org
Subject: Re: fair clock use in CFS

* William Lee Irwin III <wli@...omorphy.com> wrote:
> On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
> > please clarify - exactly what is a mistake? Thanks,
>
> The variability in ->fair_clock advancement rate was the mistake, at
> least according to my way of thinking. [...]

you are quite wrong. Let's consider the following example:

we have 10 tasks running (all at nice 0). The current task spends 20
msecs on the CPU and then a new task is picked. How much CPU time did a
waiting task become entitled to during that 20-msec wait? If fair_clock
advanced at the real-clock rate, as you suggest, then we'd credit it 20
msecs - but its true 'fair expectation' of CPU time was only
20/10 == 2 msecs!

So a 'constant-rate' fair_clock would turn the whole equilibrium upside
down: it would inflate the p->wait_runtime values, and their global sum
would no longer stay roughly constant but would run up very fast -
especially under fluctuating loads.
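
(to illustrate - here's a minimal userspace sketch of that bookkeeping,
not the actual kernel/sched_fair.c code; the toy_rq/fair_clock_delta
names are made up for the example:)

#include <stdio.h>

/*
 * Toy model of the fair clock: while nr_running tasks share the CPU,
 * fair_clock advances by delta_real / nr_running, so a waiting task
 * gets credited that (smaller) amount, not the full wall-clock delta.
 */
struct toy_rq {
	unsigned long nr_running;
	unsigned long fair_clock;	/* msecs, for simplicity */
};

static unsigned long fair_clock_delta(const struct toy_rq *rq,
				      unsigned long delta_real)
{
	return rq->nr_running ? delta_real / rq->nr_running : delta_real;
}

int main(void)
{
	struct toy_rq rq = { .nr_running = 10, .fair_clock = 0 };

	/* the current task just ran for 20 msecs of real time: */
	unsigned long delta = fair_clock_delta(&rq, 20);
	rq.fair_clock += delta;

	/* prints 2, not 20 - that's all each waiter is entitled to */
	printf("fair_clock advanced by %lu msecs\n", delta);
	return 0;
}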

the fair_clock is the fundamental expression of the "fair CPU timeline",
and a task's expected runtime is always measured against it, not against
the real clock. The only time we measure real time directly is when a
_single_ task runs on the CPU - but in that case the task truly did
spend that time on the CPU, exclusively. See the exec_time calculations
in kernel/sched_fair.c.
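
(and to make the 'roughly constant global sum' point concrete, another
toy sketch - again not the real code, just the bookkeeping rule that a
waiter is credited the fair-clock delta while the runner is debited
whatever it consumed beyond its fair share:)

#include <stdio.h>

#define NTASKS 10

int main(void)
{
	long wait_runtime[NTASKS] = { 0 };
	long sum = 0;
	int i, running;

	/* one round-robin pass: each task runs 20 msecs of real time */
	for (running = 0; running < NTASKS; running++) {
		long delta_real = 20;
		long delta_fair = delta_real / NTASKS;	/* 2 msecs */

		for (i = 0; i < NTASKS; i++) {
			if (i == running)
				/* ran: consumed real time beyond its share */
				wait_runtime[i] -= delta_real - delta_fair;
			else
				/* waited: earned its fair share */
				wait_runtime[i] += delta_fair;
		}
	}

	for (i = 0; i < NTASKS; i++)
		sum += wait_runtime[i];

	/* prints 0; crediting waiters with real time instead would add
	 * roughly 9*20 msecs to the sum per slice */
	printf("sum of wait_runtime: %ld\n", sum);
	return 0;
}
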
Ingo