Message-ID: <20070421083343.GB29800@elte.hu>
Date: Sat, 21 Apr 2007 10:33:43 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Bill Davidsen <davidsen@....com>
Cc: Nick Piggin <npiggin@...e.de>, Bill Huey <billh@...ppy.monkey.org>,
Mike Galbraith <efault@....de>,
Peter Williams <pwil3058@...pond.net.au>,
linux-kernel@...r.kernel.org, ck list <ck@....kolivas.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
* Bill Davidsen <davidsen@....com> wrote:
> All of my testing has been on desktop machines, although in most cases
> they were really loaded desktops with load averages of 10..100 from
> time to time, and none were low-memory machines. Up to CFS v3 I thought
> nicksched was my winner; now CFS v3 looks better, by not stumbling
> under stupid loads.
Nice! I hope CFS v4 kept up that good tradition too ;)
> I have not tested:
> 1 - server loads, nntp, smtp, etc
> 2 - low memory machines
> 3 - uniprocessor systems
>
> I think these should be tested before drawing conclusions. Or, if
> someone has tried them, perhaps they could report what they saw.
> People are talking about smoothness, but not about how many pages per
> second come out of their overloaded web server.
I tested heavily swapping systems (make -j50 workloads easily trigger
that). I also tested UP systems and a handful of SMP systems. I have
also tested massive_intr.c, which I believe is a good indicator of how
fairly CPU time is distributed between partly-sleeping, partly-running
server threads. But I very much agree that diverse feedback is sought
and welcome, both from those who are happy with the current scheduler
and those who are unhappy with it.
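For those who have not seen massive_intr.c, the idea can be sketched
roughly like this (an illustrative stand-in, not the actual test
program; the thread count and run/sleep periods below are made-up
parameters):

/*
 * massive_intr-style fairness sketch: each worker busy-runs for ~8 ms,
 * sleeps for ~1 ms, and counts its loops.  Under a fair scheduler the
 * per-thread loop counts should come out roughly equal.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS   16		/* number of partly-sleeping workers */
#define RUN_NSEC   8000000LL	/* ~8 ms busy period per loop */
#define SLEEP_NSEC 1000000	/* ~1 ms sleep period per loop */
#define DURATION   10		/* test length in seconds */

static volatile int stop;
static long counts[NTHREADS];

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void *worker(void *arg)
{
	long id = (long)arg;
	struct timespec nap = { 0, SLEEP_NSEC };

	while (!stop) {
		long long t0 = now_ns();

		while (now_ns() - t0 < RUN_NSEC)
			;			/* burn CPU */
		nanosleep(&nap, NULL);		/* pretend to be interactive */
		counts[id]++;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);
	sleep(DURATION);
	stop = 1;
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	for (i = 0; i < NTHREADS; i++)
		printf("thread %2ld: %ld loops\n", i, counts[i]);
	return 0;
}

Build with something like "gcc -O2 -pthread sketch.c" (plus -lrt on
older glibc for clock_gettime); a wide spread in the printed per-thread
loop counts would indicate unfair CPU distribution between the
partly-sleeping workers.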
Ingo