Message-Id: <1219757487.31371.19.camel@matrix>
Date: Tue, 26 Aug 2008 15:31:27 +0200
From: Stefani Seibold <stefani@...bold.net>
To: Theodore Tso <tytso@....edu>
Cc: Nick Piggin <nickpiggin@...oo.com.au>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org, Dario Faggioli <raistlin@...ux.it>,
Max Krasnyansky <maxk@...lcomm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 6/6] sched: disabled rt-bandwidth by default
On Tuesday, 26.08.2008, at 08:50 -0400, Theodore Tso wrote:
> On Tue, Aug 26, 2008 at 09:27:26PM +1000, Nick Piggin wrote:
> >
> > Oh with this much handwaving from you old timers I feel much better
> > about it ;) I bet before the bug report and change to 10s, any
> > application that hogged the CPU for more than 0.9 seconds was just
> > broken too, right? But 10s is more than enough for everybody?
> >
Sorry, the world of embedded programming is sometimes stranger than
theory. Normally a real-time process would not lock the CPU for more
than 1 second. But in some circumstances, especially FPGA
initialisation and long-term measurements, it is possible that the
real-time process locks the CPU for more than a second, sometimes for
more than 10 seconds. If the embedded program has been designed that
way, this behaviour is desired.
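
To make that concrete, here is a minimal, self-contained sketch of the
pattern I mean; the fpga_init_done() stub is purely illustrative (not
from any real project) and just stands in for polling a memory-mapped
status register until the bitstream has loaded:

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for polling a memory-mapped FPGA status register; here
     * it simply reports "done" after 10 seconds of wall-clock time. */
    static int fpga_init_done(time_t start)
    {
            return time(NULL) - start >= 10;
    }

    int main(void)
    {
            struct sched_param sp = { .sched_priority = 80 };
            time_t start = time(NULL);

            /* Run at a high fixed priority; no lower-priority task
             * gets the CPU until this task blocks or exits. */
            if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
                    perror("sched_setscheduler");
                    return 1;
            }

            /* Deliberate busy-wait, possibly well over 10 seconds -
             * exactly the behaviour a default throttle would cut off. */
            while (!fpga_init_done(start))
                    ;

            return 0;
    }
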
> Actually, any real-time application which hogs the CPU at a high
> real-time priority for more than one second is probably doing
> something broken. The whole point of high real-time priorities is to
> do something really fast, get in and get out. Usually such routines
> are measured in milliseconds or microseconds.
> Think about it *this* way --- what would you think of some device
> driver which hogged an interrupt for a full second, never mind 10
> seconds. You'd say it was broken, right? Now consider that a high
> real-time priority thread might be running at a higher priority than
> interrupt handlers, and in fact could preempt interrupt handlers....
>
> > > Simply because we use common sense instead of following every single
> > > POSIX brainfart by the letter.
> >
> > How is that a brainfart? It is simple, relatively unambiguous, and not
> > arbitrary. You really say the POSIX specified behaviour is "a brainfart",
> > but adding an arbitrary 10s throttle "but the process might be preempted
> > and lose the CPU to a lower priority task if it uses 10s of consecutive
> > CPU time" would eliminate that brainfart? I have to laugh.
>
> We've not followed POSIX before when it hasn't made sense. For
> example, "df" and "du" report their output in kilobytes, instead of
> 512-byte sectors, per POSIX's demands.
>
This has nothing to do with POSIX. It is standard real-time behaviour.
RT programming is a job like writing device drivers: you must know what
you are doing.

Modifying the scheduler so that a real-time process gives away the CPU
after a given time will certainly break some embedded applications.

Don't think only of desktop or enterprise LINUX boxes; there are many
more embedded LINUX devices on this planet, and quite a few of them
rely on the old scheduler behaviour.

The basic LINUX guideline is simple: the kernel must never break
userland applications.
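
For what it is worth, if the throttle does go in with a default limit,
a system that depends on the old behaviour should be able to restore it
at boot by writing -1 to kernel.sched_rt_runtime_us - assuming the
sysctl exported by the rt-bandwidth patches; a rough sketch, error
handling kept minimal:

    #include <stdio.h>

    /* Disable the RT throttle entirely (root only): -1 means "no
     * limit", i.e. a SCHED_FIFO task may keep the CPU indefinitely,
     * as before the rt-bandwidth changes. */
    int main(void)
    {
            FILE *f = fopen("/proc/sys/kernel/sched_rt_runtime_us", "w");

            if (!f) {
                    perror("sched_rt_runtime_us");
                    return 1;
            }
            fprintf(f, "-1\n");
            fclose(f);
            return 0;
    }

But that only helps people who know the knob exists; devices already in
the field relying on the old default would still break.
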
> > root is allowed to shoot themselves in the foot. root is the safeguard.
>
> We've done things before to make things harder for root; for example
> we've restricted what /dev/mem can do. And root can always lift the
> ulimit.
>
> - Ted
What is coming next? A device driver manager which kills any driver
that uses too much CPU resource? Or one that throttles or kicks off the
responsible driver if the hardware generates too many interrupts?

Kernel and embedded real-time programmers should know what they are
doing.
Stefani