Message-ID: <20070724231400.GA18307@gnuppy.monkey.org>
Date: Tue, 24 Jul 2007 16:14:00 -0700
From: Bill Huey (hui) <billh@...ppy.monkey.org>
To: Chris Snook <csnook@...hat.com>
Cc: Chris Friesen <cfriesen@...tel.com>, Tong Li <tong.n.li@...el.com>,
mingo@...e.hu, linux-kernel@...r.kernel.org,
Con Kolivas <kernel@...ivas.org>,
"Bill Huey (hui)" <billh@...ppy.monkey.org>
Subject: Re: [RFC] scheduler: improve SMP fairness in CFS
On Tue, Jul 24, 2007 at 05:22:47PM -0400, Chris Snook wrote:
> Bill Huey (hui) wrote:
> Well, you need enough CPU time to meet your deadlines. You need
> pre-allocated memory, or to be able to guarantee that you can allocate
> memory fast enough to meet your deadlines. This principle extends to any
> other shared resource, such as disk or network. I'm being vague because
> it's open-ended. If a medical device fails to meet realtime guarantees
> because the battery fails, the patient's family isn't going to care how
> correct the software is. Realtime engineering is hard.
...
> Actually, it's worse than merely an open problem. A clairvoyant fair
> scheduler with perfect future knowledge can underperform a heuristic fair
> scheduler, because the heuristic scheduler can guess the future incorrectly
> resulting in unfair but higher-throughput behavior. This is a perfect
> example of why we only try to be as fair as is beneficial.
I'm glad we agree on the above points. :)
It might be that we need another, stiffer policy than SCHED_OTHER: a SCHED_ISO or something with
stricter rebalancing semantics for -rt applications, sort of a super SCHED_RR. That's definitely
needed, and I don't see how the current CFS implementation can deal with it properly at this time,
even with numerical running averages, etc.
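Just to make the userspace side of this concrete: today an -rt application asks for a realtime
policy roughly like the sketch below (plain POSIX, nothing CFS-specific; SCHED_ISO as I mean it
here is hypothetical and not a mainline policy, so the example uses SCHED_RR and only notes where
a stricter policy would slot in; needs CAP_SYS_NICE/root to actually succeed):

	/* Minimal userspace sketch; SCHED_ISO is hypothetical, so SCHED_RR
	 * stands in for the "stricter" policy an -rt app would request. */
	#include <sched.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		struct sched_param sp;

		memset(&sp, 0, sizeof(sp));
		/* A "super SCHED_RR" would be requested the same way, just
		 * with a new policy constant and stricter cross-CPU
		 * rebalancing behind it in the kernel. */
		sp.sched_priority = sched_get_priority_min(SCHED_RR);

		if (sched_setscheduler(0, SCHED_RR, &sp) == -1) {
			perror("sched_setscheduler");
			return 1;
		}
		printf("running under SCHED_RR, priority %d\n",
		       sp.sched_priority);
		return 0;
	}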
SCHED_FIFO is another issue, and it's actually more complicated than just per-cpu run queues in
that it requires a global priority analysis. I don't see how CFS can deal with SCHED_FIFO
efficiently without moving to a single run queue. This is a complicated problem with a significant
set of trade-offs to take into account (CPU binding, etc.).
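To spell out what I mean by "global priority analysis", here's a toy sketch (not kernel code, all
of the names are made up) of the invariant strict SCHED_FIFO scheduling has to preserve on SMP:
the highest-priority runnable task system-wide should never sit queued while a lower-priority
task runs on some other CPU. Answering "which CPU should this waking task preempt?" needs a view
across all the run queues, which is exactly what purely per-cpu queues don't give you:

	/* Toy illustration of the global-priority decision; hypothetical
	 * types, not kernel code. */
	#include <stdio.h>

	#define NR_CPUS 4

	struct toy_rq {
		int curr_prio;	/* prio of the task currently on this CPU */
	};

	/* Find the CPU running the lowest-priority task: the CPU a newly
	 * woken, higher-priority FIFO task should be pushed to.  Note the
	 * scan is inherently global across all run queues. */
	static int find_lowest_prio_cpu(struct toy_rq rq[], int nr_cpus)
	{
		int cpu, lowest = 0;

		for (cpu = 1; cpu < nr_cpus; cpu++)
			if (rq[cpu].curr_prio < rq[lowest].curr_prio)
				lowest = cpu;
		return lowest;
	}

	int main(void)
	{
		struct toy_rq rq[NR_CPUS] = { {90}, {50}, {70}, {20} };
		int waking_prio = 60;
		int target = find_lowest_prio_cpu(rq, NR_CPUS);

		if (waking_prio > rq[target].curr_prio)
			printf("push waking task (prio %d) to cpu %d (prio %d)\n",
			       waking_prio, target, rq[target].curr_prio);
		else
			printf("waking task waits behind higher-prio work\n");
		return 0;
	}

Add CPU affinity masks to that and the "which CPU" question gets much messier, which is the
trade-off I'm pointing at above.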
>> Tong's previous trio patch is an attempt at resolving this using a generic
>> grouping mechanism and some constructive discussion should come of it.
>
> Sure, but it seems to me to be largely orthogonal to this patch.
It's based on the same kinds of ideas he's been experimenting with in Trio. I can't name another
engineer who has posted to lkml recently with quite the depth of experience in this area that he
has. It would be nice to facilitate/incorporate some of his ideas, or get him to work on something
to this end that's suitable for inclusion in some tree somewhere.
bill