Message-ID: <20070413235812.GY2986@holomorphy.com>
Date: Fri, 13 Apr 2007 16:58:12 -0700
From: William Lee Irwin III <wli@...omorphy.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
* William Lee Irwin III <wli@...omorphy.com> wrote:
>> Where it gets complex is when the behavior patterns vary, e.g. they're
>> not entirely CPU-bound and their desired in-isolation CPU utilization
>> varies, or when nice levels vary, or both vary. [...]
On Sat, Apr 14, 2007 at 01:44:44AM +0200, Ingo Molnar wrote:
> yes. I tested things like 'massive_intr.c' (attached, written by Satoru
> Takeuchi) which starts N tasks which each work for 8msec then sleep
> 1msec:
[...]
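
The work/sleep pattern described above boils down to something like the
following minimal sketch: N forked tasks, each spinning for ~8 msec and
then sleeping for 1 msec. This is only an illustration of the workload
shape, not the attached massive_intr.c itself; the constants, the
argv-based task count, and helper names like elapsed_ns() are made up for
the sketch, and it assumes POSIX clock_gettime() (link with -lrt on older
glibc).

/* A minimal sketch (not the attached massive_intr.c): each task spins
 * for ~8 msec, then sleeps 1 msec, forever.  Task count comes from argv. */
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define WORK_NSEC   (8 * 1000000L)      /* 8 msec of CPU-bound spinning */
#define SLEEP_NSEC  (1 * 1000000L)      /* 1 msec of sleep */

static long elapsed_ns(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1000000000L +
               (b->tv_nsec - a->tv_nsec);
}

static void worker(void)
{
        struct timespec start, now, nap = { 0, SLEEP_NSEC };

        for (;;) {
                clock_gettime(CLOCK_MONOTONIC, &start);
                do {                            /* burn the CPU for ~8 msec */
                        clock_gettime(CLOCK_MONOTONIC, &now);
                } while (elapsed_ns(&start, &now) < WORK_NSEC);
                nanosleep(&nap, NULL);          /* then sleep for 1 msec */
        }
}

int main(int argc, char **argv)
{
        int i, ntasks = argc > 1 ? atoi(argv[1]) : 4;

        for (i = 0; i < ntasks; i++)
                if (fork() == 0)
                        worker();               /* children never return */
        while (wait(NULL) > 0)
                ;                               /* run under timeout(1), or interrupt by hand */
        return 0;
}
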
> another related test-utility is one i wrote:
> http://people.redhat.com/mingo/scheduler-patches/ring-test.c
> this is a ring of 100 tasks each doing work for 100 msecs and then
> sleeping for 1 msec. I usually test this by also running a CPU hog in
> parallel to it, and checking whether it gets ~50.0% of CPU time under
> CFS. (it does)
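
The ring workload can be sketched in the same spirit: an assumed
reconstruction of the structure described above (100 tasks joined by
pipes, each waiting for a token, spinning for its 100 msec of work,
sleeping 1 msec, and passing the token on), not the actual ring-test.c
from the URL above. Names such as NTASKS and burn() are illustrative
only.

/* A minimal sketch of a pipe ring (assumed structure, not ring-test.c):
 * each of 100 tasks waits for a token, spins for ~100 msec, sleeps 1 msec,
 * and passes the token to the next task in the ring. */
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NTASKS      100
#define WORK_NSEC   (100 * 1000000L)    /* 100 msec of work per token hold */
#define SLEEP_NSEC  (1 * 1000000L)      /* 1 msec of sleep */

static void burn(long nsec)             /* spin for roughly nsec nanoseconds */
{
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
                clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000000000L +
                 (now.tv_nsec - start.tv_nsec) < nsec);
}

int main(void)
{
        static int fds[NTASKS][2];
        struct timespec nap = { 0, SLEEP_NSEC };
        char token = 't';
        int i;

        for (i = 0; i < NTASKS; i++)
                if (pipe(fds[i]) < 0)
                        exit(1);

        for (i = 0; i < NTASKS; i++) {
                if (fork() == 0) {
                        int in  = fds[i][0];
                        int out = fds[(i + 1) % NTASKS][1];

                        for (;;) {
                                read(in, &token, 1);    /* wait for the token */
                                burn(WORK_NSEC);        /* this task's work */
                                nanosleep(&nap, NULL);  /* sleep 1 msec */
                                write(out, &token, 1);  /* hand the token on */
                        }
                }
        }
        write(fds[0][1], &token, 1);    /* inject the token to start the ring */
        pause();                        /* parent just waits; kill by hand */
        return 0;
}

Run with a simple CPU hog alongside, the number to check is the hog's
share of CPU time, which per the description above should settle around
50% under CFS.
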
These are both tremendously useful. The code is also in rather good
shape, so only minimal modifications are needed (for massive_intr.c I'm
not even sure any are needed at all) to plug them into the test harness
I'm aware of. I'll queue them both up, adjust them as needed, and send
them over to testers I don't want to burden with hacking on testcases
that I myself am asking them to add to their suites.
-- wli