Message-ID: <45F82C01.3000704@goop.org>
Date: Wed, 14 Mar 2007 10:08:17 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Daniel Walker <dwalker@...sta.com>
CC: john stultz <johnstul@...ibm.com>, Andi Kleen <ak@...e.de>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Con Kolivas <kernel@...ivas.org>,
Rusty Russell <rusty@...tcorp.com.au>,
Zachary Amsden <zach@...are.com>,
James Morris <jmorris@...ei.org>,
Chris Wright <chrisw@...s-sol.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
cpufreq@...ts.linux.org.uk,
Virtualization Mailing List <virtualization@...ts.osdl.org>
Subject: Re: Stolen and degraded time and schedulers
Daniel Walker wrote:
>> I suppose you could, but that seems more complex. I think you could
>> encode the same information in the measurement of how much work a cpu
>> actually got done while a process was scheduled on it.
>>
>
> I know it's more complex, but that seems more like the "right" thing to
> do.
Why's that?
I'm proposing that rather than using "time spent scheduled" as an
approximation of how much progress a process made on a particular CPU
during its timeslice, we should measure it directly. It seems to me that
this usefully captures both problems at once: variable-speed cpus and
hypervisors stealing time from guests.
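To make that concrete, here is a minimal user-space sketch (not kernel code;
struct cpu_interval and its fields are hypothetical stand-ins for cpufreq and
hypervisor stolen-time accounting) of one way "work done" could be derived by
scaling the non-stolen part of the slice by the cpu's current speed:

/*
 * Hedged sketch only: the sampling sources are stubbed with fixed numbers;
 * a real implementation would get them from cpufreq and the hypervisor.
 */
#include <stdio.h>

/* Hypothetical per-interval samples, normally supplied by the platform. */
struct cpu_interval {
	double wall_ns;		/* time the task was scheduled on the cpu   */
	double stolen_ns;	/* time the hypervisor stole in that window */
	double cur_khz;		/* cpu frequency during the window          */
	double max_khz;		/* nominal full-speed frequency             */
};

/* "Work done": scale the non-stolen time by how fast the cpu was running. */
static double work_done_ns(const struct cpu_interval *iv)
{
	double usable = iv->wall_ns - iv->stolen_ns;

	if (usable < 0)
		usable = 0;
	return usable * (iv->cur_khz / iv->max_khz);
}

int main(void)
{
	struct cpu_interval iv = {
		.wall_ns   = 10e6,	/* 10ms slice                  */
		.stolen_ns = 2e6,	/* 2ms stolen by the hypervisor */
		.cur_khz   = 1000000,	/* cpu throttled to 1GHz...     */
		.max_khz   = 2000000,	/* ...out of a 2GHz maximum     */
	};

	printf("scheduled: %.1f ms, work done: %.1f ms-equivalent\n",
	       iv.wall_ns / 1e6, work_done_ns(&iv) / 1e6);
	return 0;
}

With those numbers the task was scheduled for 10ms but only made about 4ms
worth of progress, which is the quantity I'm suggesting the scheduler account.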
The actual length of the timeslices is an orthogonal issue. It may be
that you want to give processes more cpu time by making their quanta
longer to compensate for lost cpu time, but that would affect their
real-time characteristics. Or you could keep the quanta small, and give
those processes more of them.
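For instance (made-up numbers, not any particular scheduler's policy), the two
compensation styles could look like:

#include <stdio.h>

int main(void)
{
	double slice_ms = 10.0;	/* nominal quantum                         */
	double work_ms  = 6.0;	/* work actually achieved in that quantum  */

	/* Option 1: stretch the next quantum so a full slice of work fits. */
	double stretched_ms = slice_ms * (slice_ms / work_ms);

	/* Option 2: keep the quantum fixed and repay the shortfall later. */
	double deficit_ms = slice_ms - work_ms;

	printf("stretched quantum: %.1f ms, or deficit to repay: %.1f ms\n",
	       stretched_ms, deficit_ms);
	return 0;
}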
But all this is getting deep into scheduler design, which is not what I
want to get into; I'm just proposing a better metric for a scheduler to
use in whatever way it wants.
J