Message-Id: <200705031445.58178.mgd@technosis.de>
Date: Thu, 3 May 2007 14:45:45 +0200
From: Michael Gerdau <mgd@...hnosis.de>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>,
Gene Heskett <gene.heskett@...il.com>,
Juliusz Chroboczek <jch@....jussieu.fr>,
Mike Galbraith <efault@....de>,
Peter Williams <pwil3058@...pond.net.au>,
ck list <ck@....kolivas.org>,
Thomas Gleixner <tglx@...utronix.de>,
William Lee Irwin III <wli@...omorphy.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6
> regarding the fairness of the different schedulers, please note the
> different runtimes for each component of the workload:
>
> LTMM: 5655.07/ 5682
> LTMB: 7729.81/ 7755
> LTBM: 7720.70/ 7746
>
> this means that a fair scheduler would _not_ be the one that finishes
> them first on wall-clock time (!). A fair scheduler would run each of
> them at 33% capacity until the fastest one (LTMM) reaches ~5650 seconds
> runtime and finishes, and _then_ the remaining ~2050 seconds of runtime
> would be done at 50%/50% capacity between the remaining two jobs. I.e.
> the fair wall-clock results should be around:
>
> LTMM: ~8500 seconds
> LTMB: ~10600 seconds
> LTBM: ~10600 seconds
>
> (but the IO portion of the workloads and other scheduling effects could
> easily shift these numbers by a few minutes.)
Correct. I will try to cook up some statistics for the next round
of results. (I've been redoing cfs-v6, have done cfs-v7 and then redone
it because of strange results, and hopefully will do cfs-v8 tonight
as well as sd048 during the next days -- it's not easy to keep up with
you guys rolling out new versions ;-)
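As a quick sanity check on your arithmetic above: the quoted estimates
only add up on a 2-CPU box (my assumption -- 3 jobs sharing 2 CPUs get
2/3 of a CPU each, so LTMM at ~5655 s of CPU time needs ~8480 s of wall
clock, and once it finishes the remaining two each get a full CPU). A
minimal Python sketch of that ideal fair schedule, under those
assumptions:

# Ideal fair wall-clock times for CPU-bound jobs under perfect sharing.
# Assumes a 2-CPU machine; CPU times are the quoted per-job runtimes.
cpu_time = {"LTMM": 5655.07, "LTMB": 7729.81, "LTBM": 7720.70}
ncpus = 2

def fair_wallclock(jobs, ncpus):
    remaining = dict(jobs)   # CPU seconds still owed to each job
    wall = 0.0
    finished = {}
    while remaining:
        # each job gets an equal share, capped at one full CPU
        rate = min(1.0, ncpus / len(remaining))
        # advance time until the job with the least work left finishes
        dt = min(remaining.values()) / rate
        wall += dt
        for name in list(remaining):
            remaining[name] -= rate * dt
            if remaining[name] <= 1e-6:
                finished[name] = wall
                del remaining[name]
    return finished

print(fair_wallclock(cpu_time, ncpus))
# LTMM ~8483 s, LTBM ~10548 s, LTMB ~10557 s -- consistent with the
# quoted ~8500 / ~10600 / ~10600 estimates.

(Just a sketch, of course; the real runs also see the IO portion and
other scheduling noise, as you note.)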
> regarding the results: it seems the wallclock portion of LTMM/j3 is too
> small - even though the 3 tasks ran in parallel, in the CFS test LTMM
> finished just as fast as if it were running alone, right?
Right. I'm really very surprised, but this result persisted when I redid
both LTMM/j1 (twice!) and LTMM/j3 with cfs-v6.
I don't really understand it and now think it points to some hidden
"fairness deficiency" that hopefully is corrected in cfs-v8. At least
cfs-v7 did behave differently.
> That does not
> seem to be logical and indeed suggests some sort of testing artifact.
Since it doesn't appear with any other scheduler I tested, I would not
rule out that it is a property of the CFS code rather than a testing
artifact.
> That makes it hard to judge which scheduler achieved the above 'ideal
> fair distribution' of the workloads better - for some of the results it
> was SD, for some it was CFS - but the missing LTMM/j3 number makes it
> hard to decide it conclusively. They are certainly both close enough and
> the noise of the results seems quite high.
Oh, I'm confident both are excellent schedulers. On the other hand,
I find mainline isn't that bad either, at least for this type of
load.
Anyway, as I wrote above, I'm continuing my tests and am collecting
data for the various schedulers. Presumably over the weekend I'll
mail out the next round.
Best,
Michael
--
Technosis GmbH, Managing Directors: Michael Gerdau, Tobias Dittmar
Registered office Hamburg; HRB 89145, Amtsgericht Hamburg
Vote against SPAM - see http://www.politik-digital.de/spam/
Michael Gerdau email: mgd@...hnosis.de
GPG-keys available on request or at public keyserver