Message-ID: <20070414080833.GL943@1wt.eu>
Date: Sat, 14 Apr 2007 10:08:34 +0200
From: Willy Tarreau <w@....eu>
To: Ingo Molnar <mingo@...e.hu>
Cc: Nick Piggin <npiggin@...e.de>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
On Sat, Apr 14, 2007 at 08:43:34AM +0200, Ingo Molnar wrote:
>
> * Ingo Molnar <mingo@...e.hu> wrote:
>
> > Nick noticed that upon fork we change parent->wait_runtime but we do
> > not requeue it within the rbtree.
>
> this fix is not complete - because the child runqueue is locked here,
> not the parent's. I've fixed this properly in my tree and have uploaded
> a new sched-modular+cfs.patch. (the effects of the original bug are
> mostly harmless, the rbtree position gets corrected the first time the
> parent reschedules. The fix might improve heavy forker handling.)
It looks like it did not reach your public dir yet.
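If I read the bug right, the rbtree is keyed on wait_runtime (directly or
indirectly), so adjusting the parent's wait_runtime while it is still queued
leaves its node at a stale position until the next dequeue/enqueue. Something
like the sketch below is what I picture the fix doing, under the parent's
runqueue lock; the struct and helper names here are made up for illustration,
only the rbtree calls are the real <linux/rbtree.h> API:

#include <linux/rbtree.h>

/*
 * Illustrative container only: each queued entity is assumed to be
 * sorted by ->wait_runtime. Field and helper names are hypothetical,
 * not the actual CFS code.
 */
struct cfs_node {
	struct rb_node	run_node;
	long		wait_runtime;	/* rbtree key */
};

static void requeue_node(struct rb_root *root, struct cfs_node *n, long delta)
{
	struct rb_node **link, *parent = NULL;

	/* 1. Take the node out while its key is still consistent. */
	rb_erase(&n->run_node, root);

	/* 2. Only now is it safe to change the key. */
	n->wait_runtime += delta;

	/* 3. Re-insert at the position matching the new key. */
	link = &root->rb_node;
	while (*link) {
		struct cfs_node *entry;

		parent = *link;
		entry = rb_entry(parent, struct cfs_node, run_node);
		if (n->wait_runtime < entry->wait_runtime)
			link = &parent->rb_left;
		else
			link = &parent->rb_right;
	}
	rb_link_node(&n->run_node, parent, link);
	rb_insert_color(&n->run_node, root);
}

Changing the key in place without the erase/re-insert would leave the tree
mis-ordered, which I take to be the "mostly harmless" corruption you describe,
fixed up the next time the parent is requeued.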
BTW, I've given it a try. It seems pretty usable. I have also tried the
usual meaningless "glxgears" test with 12 of them running at the same time,
and they rotate very smoothly; there is absolutely no pause in any of them.
But they don't all run at the same speed, and top reports their CPU load
varying from 3.4% to 10.8%, with what looks like more CPU assigned to the
first processes and less to the last ones. But this is just a rough
observation on a stupid test, I would not call it scientific in any way
(and X has its share in the test too).
I'll perform other tests when I can rebuild with your fixed patch.
Cheers,
Willy