Message-ID: <20070414105338.GB19454@elte.hu>
Date: Sat, 14 Apr 2007 12:53:39 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Willy Tarreau <w@....eu>
Cc: Nick Piggin <npiggin@...e.de>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Con Kolivas <kernel@...ivas.org>,
	Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]

* Willy Tarreau <w@....eu> wrote:

> Forking becomes very slow above a load of 100 it seems. Sometimes, the
> shell takes 2 or 3 seconds to return to prompt after I run "scheddos &"

this might be changed/impacted by the parent-requeue fix that is in the
updated (for real, promise! ;) patch. Right now on CFS a forking parent
shares its own run stats with the child 50%/50%. This means that heavy
forkers are indeed penalized. Another logical choice would be 100%/0%: a
child has to earn its own right.
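
to make the two policies concrete, here is a minimal sketch of the
current 50%/50% split. This is not the actual patch; the task_stats
structure and the wait_runtime field are made-up names, purely for
illustration:

    /*
     * Illustrative only, not the real CFS code.  Assume each task
     * carries an accumulated fairness credit; the name "wait_runtime"
     * is invented for this sketch.
     */
    struct task_stats {
            long wait_runtime;      /* fairness credit, may go negative */
    };

    /*
     * 50%/50% policy: the child inherits half of the parent's credit,
     * so a parent that forks in a tight loop keeps halving its own
     * credit, which is why heavy forkers get penalized.
     */
    static void fork_split_50_50(struct task_stats *parent,
                                 struct task_stats *child)
    {
            parent->wait_runtime /= 2;
            child->wait_runtime = parent->wait_runtime;
    }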

i kept the 50%/50% rule from the old scheduler, but maybe it's a more
pristine (and smaller/faster) approach to just not give new children any
stats history to begin with. I've written an add-on patch that
implements this; you can find it at:

   http://redhat.com/~mingo/cfs-scheduler/sched-fair-fork.patch
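
in the same illustrative terms, the fork behavior with the add-on patch
would look roughly like this (again just a sketch under the assumptions
above, not the patch itself):

    /*
     * 100%/0% ("no stats history") policy, which is what the add-on
     * patch is about: the parent's credit is left untouched and the
     * child starts from a clean slate, earning its own credit.
     * Reuses the illustrative task_stats from the sketch above.
     */
    static void fork_no_history(struct task_stats *parent,
                                struct task_stats *child)
    {
            (void)parent;                   /* parent keeps everything */
            child->wait_runtime = 0;        /* child starts from scratch */
    }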

> Those are very promising results, I observe nearly the same
> responsiveness as I had on Solaris 10 with 10k running processes on a
> bigger machine.

cool and thanks for the feedback! (Btw., as another test you could also
try to renice "scheddos" to +19. While that does not push the scheduler
nearly as hard as nice 0, it is perhaps more indicative of how a truly
abusive many-tasks workload would be run in practice.)

	Ingo