Message-ID: <20070413234444.GA23575@elte.hu>
Date: Sat, 14 Apr 2007 01:44:44 +0200
From: Ingo Molnar <mingo@...e.hu>
To: William Lee Irwin III <wli@...omorphy.com>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
* William Lee Irwin III <wli@...omorphy.com> wrote:
> Where it gets complex is when the behavior patterns vary, e.g. they're
> not entirely CPU-bound and their desired in-isolation CPU utilization
> varies, or when nice levels vary, or both vary. [...]
yes. I tested things like 'massive_intr.c' (attached, written by Satoru
Takeuchi), which starts N tasks that each work for 8 msec and then
sleep for 1 msec. In its output, the second column is the CPU time
each thread got: the more even those values are, the fairer the
scheduling. On vanilla I get:
mercury:~> ./massive_intr 10 10
024873 00000150
024874 00000123
024870 00000069
024868 00000068
024866 00000051
024875 00000206
024872 00000093
024869 00000138
024867 00000078
024871 00000223
on CFS I get:
neptune:~> ./massive_intr 10 10
002266 00000112
002260 00000113
002261 00000112
002267 00000112
002269 00000112
002265 00000112
002262 00000113
002268 00000113
002264 00000112
002263 00000113
so it is quite a bit more even ;)
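
for illustration, here is a minimal sketch of such a workload -- an
assumed structure, not the attached massive_intr.c itself: it forks N
processes that each spin for ~8 msec, sleep for ~1 msec, and at the end
print their pid and how many 8 msec slices they completed (a count that
only approximates the per-thread CPU time the real tool reports):

/*
 * Minimal sketch of a massive_intr-style workload (not the attached
 * massive_intr.c itself): fork N processes that each busy-loop for
 * ~8 msec, sleep ~1 msec, and repeat for the given number of seconds.
 * Each process finally prints its pid and the number of 8 msec work
 * slices it completed -- the more even these counts, the fairer the
 * scheduler.  Compile with: gcc -O2 -o intr-sketch intr-sketch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <sys/wait.h>

#define WORK_MSECS   8
#define SLEEP_MSECS  1

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void worker(int runtime_secs)
{
	double end = now_sec() + runtime_secs;
	long slices = 0;

	while (now_sec() < end) {
		double spin_until = now_sec() + WORK_MSECS / 1000.0;

		/* burn CPU for ~8 msec */
		while (now_sec() < spin_until)
			;
		slices++;

		/* then sleep for ~1 msec */
		usleep(SLEEP_MSECS * 1000);
	}
	printf("%06d %08ld\n", getpid(), slices);
}

int main(int argc, char **argv)
{
	int nproc = (argc > 1) ? atoi(argv[1]) : 10;
	int secs  = (argc > 2) ? atoi(argv[2]) : 10;
	int i;

	for (i = 0; i < nproc; i++) {
		if (fork() == 0) {
			worker(secs);
			exit(0);
		}
	}
	/* wait for all workers to print their counts */
	while (wait(NULL) > 0)
		;
	return 0;
}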
another related test utility is one I wrote:
http://people.redhat.com/mingo/scheduler-patches/ring-test.c
this is a ring of 100 tasks, each doing work for 100 msecs and then
sleeping for 1 msec. I usually test it by also running a CPU hog in
parallel and checking whether the hog gets ~50.0% of CPU time under
CFS. (It does.)
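
here is a minimal sketch of one plausible way such a ring can be
structured -- this is an assumption, the actual ring-test.c at the URL
above may be organized differently: processes connected by pipes pass a
single token around; whoever holds the token spins for ~100 msec, sleeps
for ~1 msec and hands it on, so the ring collectively behaves like one
nearly CPU-bound task, and a parallel CPU hog should end up with roughly
half the CPU:

/*
 * Minimal sketch of a ring-test-style workload (an assumed structure;
 * the actual ring-test.c at the URL above may differ): NTASKS processes
 * are connected into a ring of pipes.  The process holding the "token"
 * spins for ~100 msec, sleeps ~1 msec, then passes the token to its
 * neighbour, so the ring as a whole is almost fully CPU-bound.
 * Compile with: gcc -O2 -o ring-sketch ring-sketch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

#define NTASKS      100
#define WORK_MSECS  100
#define SLEEP_MSECS   1

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void ring_member(int rfd, int wfd)
{
	char token;

	for (;;) {
		/* block until our neighbour hands us the token */
		if (read(rfd, &token, 1) != 1)
			exit(0);

		/* burn CPU for ~100 msec */
		double spin_until = now_sec() + WORK_MSECS / 1000.0;
		while (now_sec() < spin_until)
			;

		/* sleep for ~1 msec, then pass the token on */
		usleep(SLEEP_MSECS * 1000);
		if (write(wfd, &token, 1) != 1)
			exit(0);
	}
}

int main(void)
{
	int pipes[NTASKS][2];
	char token = 't';
	int i;

	for (i = 0; i < NTASKS; i++)
		if (pipe(pipes[i]) < 0)
			return 1;

	for (i = 0; i < NTASKS; i++) {
		if (fork() == 0)
			/* task i reads from pipe i, writes to pipe (i+1) % NTASKS */
			ring_member(pipes[i][0], pipes[(i + 1) % NTASKS][1]);
	}

	/* inject the initial token and let the ring run until interrupted */
	if (write(pipes[0][1], &token, 1) != 1)
		return 1;
	pause();
	return 0;
}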
Ingo