Message-ID: <4624759A.7090301@bigpond.net.au>
Date: Tue, 17 Apr 2007 17:22:02 +1000
From: Peter Williams <pwil3058@...pond.net.au>
To: William Lee Irwin III <wli@...omorphy.com>
CC: Davide Libenzi <davidel@...ilserver.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Con Kolivas <kernel@...ivas.org>, Ingo Molnar <mingo@...e.hu>,
ck list <ck@....kolivas.org>,
Bill Huey <billh@...ppy.monkey.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair
Scheduler [CFS]
William Lee Irwin III wrote:
> On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
>> I had a quick look at Ingo's code yesterday. Ingo is always smart to
>> prepare a main dish (feature) with a nice side dish (code cleanup) for Linus ;)
>> And even this code does that pretty nicely. The deadline design looks
>> good, although I think the final "key" calculation code will end up quite
>> different from what it looks like now.
>
> The additive nice_offset breaks nice levels. For nice levels to work,
> a multiplicative priority weighting must be applied to a different,
> nonnegative metric of CPU utilization than the one used now. I've been
> trying to point this out politely by strongly suggesting that people
> test whether nice levels work.
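
To make the concern concrete, here is a toy user-space simulation of the
two schemes. It is not Ingo's code, and every name and constant in it is
invented for illustration: an additive offset only shifts the sort key by
a fixed amount, so its effect on the long-run CPU split fades, while
charging run time in inverse proportion to a per-nice weight keeps the
split at a fixed ratio.

/*
 * Toy simulation only: not kernel code, and none of these constants
 * are CFS's actual values.  Two tasks compete, one at "nice 0" and
 * one at "nice +5", under two different key schemes.
 */
#include <stdio.h>

#define TICKS        1000000L
#define NICE_OFFSET  50000L   /* hypothetical additive offset for nice +5 */
#define WEIGHT_0     1024L    /* hypothetical weight for nice 0 */
#define WEIGHT_5     335L     /* hypothetical weight for nice +5 */

int main(void)
{
        long add_a = 0, add_b = NICE_OFFSET;  /* additive-offset keys */
        long mul_a = 0, mul_b = 0;            /* weighted keys */
        long got_add_a = 0, got_add_b = 0;    /* ticks each task received */
        long got_mul_a = 0, got_mul_b = 0;

        for (long t = 0; t < TICKS; t++) {
                /* Additive scheme: run the task with the smaller key,
                 * charging one tick regardless of nice level. */
                if (add_a <= add_b) {
                        add_a++; got_add_a++;
                } else {
                        add_b++; got_add_b++;
                }

                /* Multiplicative scheme: charge each tick in inverse
                 * proportion to the task's weight. */
                if (mul_a <= mul_b) {
                        mul_a += (WEIGHT_0 * 1000) / WEIGHT_0;
                        got_mul_a++;
                } else {
                        mul_b += (WEIGHT_0 * 1000) / WEIGHT_5;
                        got_mul_b++;
                }
        }

        printf("additive:       nice 0 %.1f%%  nice +5 %.1f%%\n",
               100.0 * got_add_a / TICKS, 100.0 * got_add_b / TICKS);
        printf("multiplicative: nice 0 %.1f%%  nice +5 %.1f%%\n",
               100.0 * got_mul_a / TICKS, 100.0 * got_mul_b / TICKS);
        return 0;
}

Built as-is, the additive column should drift toward an even split as
TICKS grows, while the weighted column should stay near the 1024:335
(roughly 3:1) ratio.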
>
>
> On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
>> I would suggest thoroughly testing all your alternatives before deciding.
>> Some code and designs may look very good and small at the beginning, but
>> when you start patching them to cover all the dark spots, you effectively
>> end up with a different thing (in both design and code footprint).
>> About O(1), I never thought it was a must (besides being good marketing
>> material), and O(log(N)) *may* be just fine (to be verified, of course).
>
> The trouble with thorough testing right now is that no one agrees on
> what the tests should be, and a number of the testcases are not in great
> shape. An agreed-upon set of testcases for basic correctness should be
> devised; their implementations need to be maintainable code, and the
> tests should be set up for automated runs, with parameters changeable
> via command-line options rather than by recompiling.
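
As a strawman for that kind of self-contained, parameterized check,
something along these lines measures the CPU split across a nice delta,
with the delta and run time taken from the command line so nothing needs
recompiling. The program, its defaults and its argument layout are
invented here, not taken from any existing suite, and on SMP you would
also want to pin both children to one CPU:

/* Hedged sketch only; see the caveats above. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

static pid_t spawn_busy(int nice_val, int seconds)
{
        pid_t pid = fork();

        if (pid == 0) {
                if (nice_val && setpriority(PRIO_PROCESS, 0, nice_val) < 0)
                        perror("setpriority");
                alarm(seconds);         /* SIGALRM's default action ends the child */
                for (;;)
                        ;               /* burn CPU until the alarm fires */
        }
        return pid;
}

static double cpu_seconds(pid_t pid)
{
        struct rusage ru;

        if (wait4(pid, NULL, 0, &ru) < 0)
                return 0.0;
        return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
               ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
        /* Usage: nicetest [nice_delta] [seconds] -- both optional. */
        int nice_delta = argc > 1 ? atoi(argv[1]) : 5;
        int seconds    = argc > 2 ? atoi(argv[2]) : 10;
        pid_t a = spawn_busy(0, seconds);
        pid_t b = spawn_busy(nice_delta, seconds);
        double ta = cpu_seconds(a), tb = cpu_seconds(b);

        printf("nice 0: %.2fs  nice %+d: %.2fs  ratio %.2f\n",
               ta, nice_delta, tb, tb > 0.0 ? ta / tb : 0.0);
        return 0;
}

A pass/fail wrapper could then compare the measured ratio against
whatever ratio the scheduler under test is supposed to give for that
nice delta.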
>
> Once there's a standard regression test suite for correctness, one
> needs to be devised for performance, including interactive performance.
> The primary difficulty I see along these lines is finding a way to
> automate tests of graphics and input device response performance. Others,
> such as how deterministically priorities are respected over progressively
> smaller time intervals, and noninteractive workload performance, are
> nowhere near as difficult to arrange, and in many cases already exist.
> Just reuse SDET, AIM7/AIM9, OAST, contest, interbench, et al.
At this point, I'd like to direct everyone's attention to the simloads package:
<http://downloads.sourceforge.net/cpuse/simloads-0.1.1.tar.gz>
which contains a set of programs designed to be used in the construction
of CPU scheduler tests. Of particular use is the aspin program, which
can be used to launch tasks with specified sleep/wake characteristics.
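
For anyone who wants the flavour of it before fetching the tarball, the
idea is roughly as follows. This is only a sketch of the concept, not
aspin itself; aspin's real options and behaviour are documented in the
package.

/* Sketch of an aspin-like load: alternate a commanded sleep interval
 * with a commanded busy interval.  Argument layout and defaults here
 * are invented; link with -lrt on older glibc for clock_gettime(). */
#include <stdlib.h>
#include <time.h>

static void busy_for(long ms)
{
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
                clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000L +
                 (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

int main(int argc, char **argv)
{
        /* Usage: spinlike [sleep_ms] [busy_ms] [cycles] */
        long sleep_ms = argc > 1 ? atol(argv[1]) : 90;
        long busy_ms  = argc > 2 ? atol(argv[2]) : 10;
        long cycles   = argc > 3 ? atol(argv[3]) : 100;
        struct timespec ts = { sleep_ms / 1000, (sleep_ms % 1000) * 1000000L };

        for (long i = 0; i < cycles; i++) {
                nanosleep(&ts, NULL);   /* "interactive" sleep phase */
                busy_for(busy_ms);      /* CPU-bound wake phase */
        }
        return 0;
}

A handful of these, launched with different sleep/busy ratios and nice
levels, gives a cheap approximation of a mixed interactive and batch
load.
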
Peter
--
Peter Williams pwil3058@...pond.net.au
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/