Message-ID: <20070828094438.2af76c39@laptopd505.fenrus.org>
Date: Tue, 28 Aug 2007 09:44:38 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Al Boldi <a1426z@...ab.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
Mike Galbraith <efault@....de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: CFS review
On Tue, 28 Aug 2007 09:34:03 -0700 (PDT)
Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
>
> On Tue, 28 Aug 2007, Al Boldi wrote:
> >
> > I like your analysis, but how do you explain that these stalls
> > vanish when __update_curr is disabled?
>
> It's entirely possible that what happens is that the X scheduling is
> just a slightly unstable system - which effectively would turn a
> small scheduling difference into a *huge* visible difference.

One thing that happens if you remove __update_curr is the following
pattern (since no app ever gets preempted involuntarily); a toy
simulation of it is sketched below the sequence:
app 1 submits a full frame worth of 3D stuff to X
app 1 then sleeps/waits for that to complete
X gets to run, has 1 full frame to render, does this
X now waits for more input
app 2 now gets to run and submits a full frame
app 2 then sleeps again
X gets to run again to process and complete the frame
X goes to sleep
app 3 gets to run and submits a full frame
app 3 then sleeps
X runs
X sleeps
app 1 gets to submit a frame
etc etc
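
Here's a toy pthread sketch of that pattern (purely my illustration,
nothing from the actual X/Mesa code paths; the one-slot "frame queue"
and the app/xserver thread names are made up for the example). Each
app submits one frame and sleeps until X has rendered it, so the whole
thing serializes perfectly with no involuntary preemption anywhere:

#include <pthread.h>
#include <stdio.h>

#define NAPPS   3
#define NFRAMES 3

/* one-slot frame queue shared between the apps and "X" */
enum { EMPTY, SUBMITTED, RENDERED };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int state = EMPTY;
static int owner = -1;

static void *app(void *arg)
{
	int id = (int)(long)arg;
	int f;

	for (f = 0; f < NFRAMES; f++) {
		pthread_mutex_lock(&lock);
		while (state != EMPTY)		/* wait for the queue slot */
			pthread_cond_wait(&cond, &lock);
		printf("app %d submits a frame\n", id);
		state = SUBMITTED;
		owner = id;
		pthread_cond_broadcast(&cond);
		/* "then sleeps/waits for that to complete" */
		while (state != RENDERED || owner != id)
			pthread_cond_wait(&cond, &lock);
		state = EMPTY;
		pthread_cond_broadcast(&cond);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

static void *xserver(void *arg)
{
	int f;

	for (f = 0; f < NAPPS * NFRAMES; f++) {
		pthread_mutex_lock(&lock);
		while (state != SUBMITTED)	/* "X now waits for more input" */
			pthread_cond_wait(&cond, &lock);
		printf("X renders app %d's frame\n", owner);
		state = RENDERED;
		pthread_cond_broadcast(&cond);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t x, a[NAPPS];
	long i;

	pthread_create(&x, NULL, xserver, NULL);
	for (i = 0; i < NAPPS; i++)
		pthread_create(&a[i], NULL, app, (void *)(i + 1));
	for (i = 0; i < NAPPS; i++)
		pthread_join(a[i], NULL);
	pthread_join(x, NULL);
	return 0;
}

(build with gcc -pthread; the output is a strict submit/render
ping-pong no matter how the threads get scheduled, because every
thread blocks the moment it has done its one unit of work)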
So without preemption happening, you can get "perfect" behavior, just
because everything is cooperatively doing one thing at a time. Once
you start doing timeslices and enforcing limits on them, this
"perfect" pattern breaks down (remember, this is all software
rendering in the problem being described), and whatever you get won't
be as perfect as this.
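
For reference, here's a simplified sketch of the accounting that
__update_curr feeds (my paraphrase of the idea, not the actual kernel
source; the struct and constant names below are made up for the
example). The running task keeps getting charged for the CPU time it
consumes, and once its weighted runtime is far enough ahead of
everyone else's, the scheduler preempts it mid-frame instead of
letting the cooperative ping-pong above play out:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

#define NICE_0_WEIGHT 1024	/* weight of a nice-0 task in this sketch */

struct entity_sketch {
	u64 exec_start;		/* timestamp of the last accounting pass */
	u64 sum_exec_runtime;	/* total CPU time this task has used */
	u64 vruntime;		/* runtime weighted by the task's priority */
	u64 weight;		/* load weight derived from the nice level */
};

/* charge the currently running task for the time since the last pass */
static void update_curr_sketch(struct entity_sketch *curr, u64 now)
{
	u64 delta_exec = now - curr->exec_start;

	curr->exec_start = now;
	curr->sum_exec_runtime += delta_exec;
	/* lower-weight (nicer) tasks accumulate virtual time faster,
	 * so they become preemption candidates sooner */
	curr->vruntime += delta_exec * NICE_0_WEIGHT / curr->weight;
}

int main(void)
{
	struct entity_sketch t = { .exec_start = 0, .weight = NICE_0_WEIGHT };

	update_curr_sketch(&t, 1000);	/* task ran for 1000 time units */
	printf("vruntime now %llu\n", (unsigned long long)t.vruntime);
	return 0;
}

Disable this accounting and nobody's runtime ever advances, so the
"who has run the longest" comparison never fires and nothing gets
preempted: you fall back to the purely cooperative pattern above.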