Message-ID: <20070313141718.GE10459@waste.org>
Date: Tue, 13 Mar 2007 09:17:18 -0500
From: Matt Mackall <mpm@...enic.com>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>, Con Kolivas <kernel@...ivas.org>,
linux kernel mailing list <linux-kernel@...r.kernel.org>,
ck list <ck@....kolivas.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
On Tue, Mar 13, 2007 at 10:33:18AM +0100, Mike Galbraith wrote:
> On Tue, 2007-03-13 at 09:18 +0100, Ingo Molnar wrote:
>
> > Con, we want RSDL to /improve/ interactivity. Having new scheduler
> > interactivity logic that behaves /worse/ than the current interactivity
> > code in the presence of CPU hogs, even CPU hogs reniced to +5, is I
> > think a non-starter. Could you try to fix this, please? Good
> > interactivity in the presence of CPU hogs (be they at the default nice
> > level or at nice +5) is _the_ most important scheduler interactivity
> > metric. Anything else is really secondary.
>
> I just retested with the encoders at nice 0, and the x/gforce combo is
> terrible. Funny thing though, x/gforce isn't as badly affected by a
> kernel build. Any build is quite noticeable, but even at -j8 the effect
> doesn't seem to be (very brief test warning applies) as bad as with only
> the two encoders running. That seems quite odd.
Is gforce calling sched_yield?
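
One quick way to check, assuming strace is available (and guessing at
the binary name):

  strace -c -e trace=sched_yield -p $(pidof gforce)

Interrupt it after a few seconds; the exit summary shows how often
sched_yield is being called.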
Can you try testing with some simpler loads, like these:
memload:
#!/usr/bin/python
# churn a large string forever to stress memory allocation and copying
a = "a" * 16 * 1024 * 1024
while 1:
    b = a[1:] + "b"
    a = b[1:] + "c"
execload:
#!/bin/sh
# re-exec this script forever to stress exec()
exec ./execload
forkload:
#!/bin/sh
# spawn a background copy of this script and exit, stressing fork()
./forkload&
pipeload:
#!/usr/bin/python
# parent writes, child reads: pipe traffic that forces context switches
import os
pi, po = os.pipe()
if os.fork():
    while 1:
        os.write(po, "A" * 4096)
else:
    while 1:
        os.read(pi, 4096)
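
If sched_yield does turn out to be involved, a yield-heavy load in the
same style might help isolate it. This one is only a sketch of mine
(not part of the set above) and assumes Python's ctypes module is
available to reach libc:

yieldload:
#!/usr/bin/python
# hammer sched_yield() in a tight loop via libc (sketch; assumes ctypes)
import ctypes, ctypes.util
libc = ctypes.CDLL(ctypes.util.find_library("c"))
while 1:
    libc.sched_yield()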
--
Mathematics is the supreme nostalgia of our time.