Message-ID: <20060730222052.GA3335@gnuppy.monkey.org>
Date: Sun, 30 Jul 2006 15:20:52 -0700
From: Bill Huey (hui) <billh@...ppy.monkey.org>
To: Theodore Tso <tytso@....edu>, Nicholas Miell <nmiell@...cast.net>,
Edgar Toernig <froese@....de>,
Neil Horman <nhorman@...driver.com>,
Jim Gettys <jg@...top.org>, "H. Peter Anvin" <hpa@...or.com>,
Dave Airlie <airlied@...il.com>,
Segher Boessenkool <segher@...nel.crashing.org>,
linux-kernel@...r.kernel.org, a.zummo@...ertech.it,
jg@...edesktop.org, Keith Packard <keithp@...thp.com>,
Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>
Cc: "Bill Huey (hui)" <billh@...ppy.monkey.org>
Subject: Re: itimer again (Re: [PATCH] RTC: Add mmap method to rtc character driver)
On Sun, Jul 30, 2006 at 10:33:42AM -0400, Theodore Tso wrote:
> On Sat, Jul 29, 2006 at 06:39:36PM -0700, Bill Huey wrote:
> > No, I really meant scheduling.
>
> Bill,
>
> Do you mean frequency-based scheduling? This was mentioned, IIRC, in
> Gallmeister's book (Programming for the Real World, a must-read for
> those interested in Posix real-time interfaces) as a likely extension
> to the SCHED_RR/SCHED_FIFO scheduling policies and future additions to
> the struct sched_policy used by sched_setparam() at some future point.
Yes, I did.
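For reference, the structure in today's API is actually 'struct sched_param'
(used by sched_setparam() and sched_setscheduler()), and it expresses nothing
but a fixed priority. A minimal sketch of where a frequency-based extension
would have to hook in; the period/budget knobs in the trailing comment are
purely hypothetical:

#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };

	/* Fixed-priority RT policy; needs root or CAP_SYS_NICE. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
		perror("sched_setscheduler");
		return 1;
	}

	/*
	 * A frequency-based extension might grow knobs like these
	 * (hypothetical, present in no released kernel):
	 *
	 *	sp.sched_period = 10000000;	(ns between activations)
	 *	sp.sched_budget =  2000000;	(ns of CPU per period)
	 */
	return 0;
}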
> sched_policy). At least, that's the theory. The exact semantics of
> what would actually be useful to applications is, I believe, a little
> unclear, and of course there is the question of whether there is
It's really up to the RT application to decide what it wants. The role of
the kernel is to give it what it has requested, within reason.
> sufficient reason to try to do this as part of a system-wide
> scheduler. Alternatively, it might be sufficient to do this sort of
In its basic form, yes. It's a complicated topic, and frequency-based
schedulers are only one type in a family of such schedulers. These
schedulers are still research-ish in nature, and there isn't yet an
effective way of dealing with soft cycles in them.
The control parameters to these systems vary from algorithm to algorithm,
and they all have different control knobs outside of the traditional Posix
APIs. People have written implementations based on EDF and the like, but it
seems that folks could make better scheduling decisions if there were a
thread yield operation capable of telling the scheduler policy what to do
with the next cycle or chunk of time, especially for softer periods that
may give their own allocated cycles to another process category. My
suggestion was that a modified 'itimer' could cover the semantic expression
of these kinds of schedulers and of other CPU bandwidth schedulers, as well
as be a replacement for '/dev/rtc' if it conformed to that device's API.
The 'rtc' case would be a "harder", with respect to time, expression of
those schedulers, since that driver doesn't understand soft execution
periods and its period of execution is strict. My terminology might need
to be updated or clarified, and I'm open to that from others.
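For reference, this is how the existing '/dev/rtc' periodic-interrupt API
(Documentation/rtc.txt) is driven today; a conforming 'itimer' device would
have to honor exactly this usage, nothing below is new:

#include <linux/rtc.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long data;
	int i, fd = open("/dev/rtc", O_RDONLY);

	if (fd == -1) {
		perror("/dev/rtc");
		return 1;
	}

	/*
	 * 64 Hz periodic tick; higher rates require root to raise
	 * /proc/sys/dev/rtc/max-user-freq.
	 */
	if (ioctl(fd, RTC_IRQP_SET, 64) == -1 ||
	    ioctl(fd, RTC_PIE_ON, 0) == -1) {
		perror("ioctl");
		return 1;
	}

	/*
	 * Each read blocks until the next interrupt fires; the low
	 * byte of 'data' is the interrupt type, the rest a count.
	 */
	for (i = 0; i < 10; i++)
		read(fd, &data, sizeof(data));

	ioctl(fd, RTC_PIE_OFF, 0);
	close(fd);
	return 0;
}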
A new 'itimer' device with an extended API could also synchronously listen
to certain interrupts and deliver them as latency-critical events; a purely
illustrative sketch follows. That's another big topic of discussion.
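Something like this, purely as an illustration (the device node and the
ioctl are invented for the example and exist nowhere):

#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

#define ITIMER_BIND_IRQ	_IOW('i', 0x42, int)	/* invented here */

int main(void)
{
	unsigned long event;
	int fd = open("/dev/itimer", O_RDWR);	/* invented node */

	if (fd == -1)
		return 1;

	/*
	 * Ask to be woken synchronously when a given device IRQ
	 * fires, instead of on a clock tick.
	 */
	ioctl(fd, ITIMER_BIND_IRQ, 21);

	/*
	 * A read() would then block until that interrupt arrives,
	 * delivering it to this thread as a latency-critical event.
	 */
	read(fd, &event, sizeof(event));
	return 0;
}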
> thing at the application level across cooperating threads, in which
> case it wouldn't be necessary to try to add this kind of complicated
> scheduling gorp into the kernel.
Scheduling policies are limited in Linux, and that's probably going to
have to change in the future because of the RT patch, Xen, etc. Xen is
going to need a gang scheduler (think of sleeping on a spin lock in a
guest OS).
> In any case, I don't think this is particularly interesting to the X
> folks, although there may very well be real-time applications that
> would find this sort of thing useful.
Right, the original topic has shifted. It's more interesting to me now. :)
bill