Message-Id: <1173613149.6927.29.camel@Homer.simpson.net>
Date: Sun, 11 Mar 2007 12:39:09 +0100
From: Mike Galbraith <efault@....de>
To: Con Kolivas <kernel@...ivas.org>
Cc: linux kernel mailing list <linux-kernel@...r.kernel.org>,
ck list <ck@....kolivas.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH][RSDL-mm 0/7] RSDL cpu scheduler for 2.6.21-rc3-mm2
Hi Con,
On Sun, 2007-03-11 at 14:57 +1100, Con Kolivas wrote:
> What follows this email is a patch series for the latest version of the RSDL
> cpu scheduler (i.e. v0.29). I have addressed all bugs that I am able to
> reproduce in this version, so if some people would be kind enough to test
> whether there are any hidden bugs or oopses lurking, it would be nice to know
> in anticipation of putting this back in -mm. Thanks.
>
> Full patch for 2.6.21-rc3-mm2:
> http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc3-mm2-rsdl-0.29.patch
I'm seeing a cpu distribution problem running this on my P4 box.
Scenario:
Listening to my music collection (mp3) via Amarok. Enable the Amarok
visualization Gforce, and size it such that X and Gforce each use ~50% cpu.
Start a rip/encode of a new CD with the grip/lame encoder. Lame is set to use
both cpus, at nice 5. Once the encoders start, they receive
considerably more cpu than the nice 0 X/Gforce, taking ~120% and leaving the
remaining 80% for X/Gforce and Amarok (when it updates its ~12k entry
database) to squabble over.
With 2.6.21-rc3, X/Gforce maintain their ~50% cpu (and remain smooth), and
the encoders (100% cpu bound) get what's left when Amarok isn't eating it.
I plunked the above patch into plain 2.6.21-rc3 and retested to
eliminate other -mm tree differences, and it's repeatable. The nice 5
cpu hogs always receive considerably more cpu than the nice 0 sleepers.
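
For what it's worth, the mix is easy to mock up without the desktop apps.
A rough stand-in sketch (my own approximation, not the actual
grip/lame/Amarok setup): two nice 5 busy loops playing the encoders, plus
two nice 0 tasks with ~50% duty cycles playing X and Gforce, with the
parent reading utime+stime from /proc/<pid>/stat after 30 seconds to see
who got what.

/*
 * Rough stand-in for the workload above, not the real grip/lame/X
 * setup: NHOGS nice 5 busy loops play the encoders, NSLEEP nice 0
 * tasks with ~50% duty cycles play X and Gforce.  After RUNTIME
 * seconds the parent reads utime+stime from /proc/<pid>/stat and
 * prints each child's cpu share.
 */
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NHOGS	2	/* nice 5 cpu hogs standing in for lame */
#define NSLEEP	2	/* nice 0 ~50% duty cycle tasks, like X and Gforce */
#define RUNTIME	30	/* seconds to let the mix run */

static double now(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

static void burn(double secs)
{
	double end = now() + secs;

	while (now() < end)
		;	/* spin */
}

/* fork a child that burns 'busy' seconds, then sleeps 'idle' seconds, forever */
static pid_t spawn(int niceval, double busy, double idle)
{
	pid_t pid = fork();

	if (pid == 0) {
		setpriority(PRIO_PROCESS, 0, niceval);
		for (;;) {
			burn(busy);
			if (idle > 0)
				usleep(idle * 1e6);
		}
	}
	return pid;
}

/* utime + stime of pid, in clock ticks (fields 14 and 15 of /proc/pid/stat) */
static unsigned long cputicks(pid_t pid)
{
	char path[64];
	unsigned long utime = 0, stime = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return 0;
	fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
	       &utime, &stime);
	fclose(f);
	return utime + stime;
}

int main(void)
{
	pid_t pids[NHOGS + NSLEEP];
	long hz = sysconf(_SC_CLK_TCK);
	int i;

	for (i = 0; i < NHOGS; i++)		/* 100% cpu bound, nice 5 */
		pids[i] = spawn(5, 1.0, 0);
	for (; i < NHOGS + NSLEEP; i++)		/* ~50% duty cycle, nice 0 */
		pids[i] = spawn(0, 0.005, 0.005);

	sleep(RUNTIME);

	for (i = 0; i < NHOGS + NSLEEP; i++) {
		printf("pid %d (nice %d): %5.1f%% cpu\n", (int)pids[i],
		       i < NHOGS ? 5 : 0,
		       100.0 * cputicks(pids[i]) / hz / RUNTIME);
		kill(pids[i], SIGKILL);
	}
	while (wait(NULL) > 0)
		;
	return 0;
}

If the two nice 0 tasks can't hold their ~50% each here while the nice 5
hogs soak up more than their share, that would show the imbalance without
any of the desktop apps involved.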
-Mike