Message-ID: <20070927065447.GA28697@elte.hu>
Date: Thu, 27 Sep 2007 08:54:47 +0200
From: Ingo Molnar <mingo@...e.hu>
To: André Goddard Rosa <andre.goddard@...il.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>, linux-kernel@...r.kernel.org
Subject: Re: sched-devel feedback
* André Goddard Rosa <andre.goddard@...il.com> wrote:
> Hi, Ingo, Mike and Peter!
>
> Just passing around to say that 2.6.23-rc8-sched-dev is the best
> scheduler ever to me. It's great for 3D games.
cool! :-)
> http://www.openarena.ws/?files is really great with this
> scheduler. I played a whole match without any slowdown, smooth playing
> all the time. I had never played it before like this, it made a huge
> difference to me. Even older CFS releases and -ck did not make it
> sooooo smooth. It was really smooth _all_ the time.
i'm wondering: with previous schedulers, in what situations did you
notice smoothness problems? Was the scenario in any way deterministic,
or were they just random delays that are hard to describe?
Just in case you have smoothness problems in the future, a good way of
measuring them objectively is to enable CONFIG_SCHED_DEBUG=y and
CONFIG_SCHEDSTATS=y and to monitor the se.wait_max field in the
/proc/PID/task/*/sched file[s].
Every time there's a stutter or some other smoothness problem, that
field's value should increase. (A few milliseconds up to a few dozen
milliseconds is normally fine - anything above 100 msecs is probably
less fine.) By looking at that latency field you can compare two
kernels. (And by echoing 0 into the sched file you can clear these stats.)
So by saying "under .23-rc8 se.wait_max was 50 msecs while in .24-rc1 it
increased to 250 msecs" everyone can effectively complain to us about
smoothness problems :-)
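
For example, a minimal sketch along these lines could print and reset
se.wait_max for every thread of the game process. It assumes the
/proc/PID/task/*/sched layout above, a plain "field : value" line format
and millisecond units; the script itself is only an illustration, not a
supported tool, so adjust it to whatever your kernel actually prints.

#!/usr/bin/env python3
# Watch se.wait_max per thread of one process. Assumes
# CONFIG_SCHED_DEBUG=y and CONFIG_SCHEDSTATS=y, and that each
# /proc/<pid>/task/<tid>/sched line looks like "se.wait_max : 12.345678"
# (milliseconds) - adjust the parsing if your kernel prints it differently.
import glob
import sys

def wait_max_per_thread(pid):
    stats = {}
    for path in glob.glob('/proc/%s/task/*/sched' % pid):
        tid = path.split('/')[4]
        with open(path) as f:
            for line in f:
                if line.startswith('se.wait_max'):
                    stats[tid] = float(line.split(':')[1])
    return stats

def reset_stats(pid):
    # echoing 0 into the sched file clears the per-task statistics
    for path in glob.glob('/proc/%s/task/*/sched' % pid):
        with open(path, 'w') as f:
            f.write('0\n')

if __name__ == '__main__':
    pid = sys.argv[1]
    if len(sys.argv) > 2 and sys.argv[2] == 'reset':
        reset_stats(pid)
    else:
        for tid, ms in sorted(wait_max_per_thread(pid).items()):
            print('%s: se.wait_max = %.3f msecs' % (tid, ms))

Run it once per kernel after a comparable game session (and pass 'reset'
to clear the stats between runs); the largest printed value is the
number to quote.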
> The scheduler team did really good work on this!
>
> Thank you so much for this great work,
you are welcome!
Ingo