Message-ID: <470F66E7.7010509@tmr.com>
Date: Fri, 12 Oct 2007 08:21:59 -0400
From: Bill Davidsen <davidsen@....com>
To: Ingo Molnar <mingo@...e.hu>
CC: Nick Piggin <nickpiggin@...oo.com.au>,
Nicholas Miell <nmiell@...cast.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.23
Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@...oo.com.au> wrote:
>
>> ;) I think you snipped the important bit:
>>
>> "the peak is terrible but it has virtually no dropoff and performs
>> better under load than the default 2.6.21 scheduler." (verbatim)
>
> hm, i understood that peak remark to be in reference to FreeBSD's
> scheduler (which the FreeBSD guys are primarily interested in
> obviously), not v2.6.21 - but i could be wrong.
>
> In any case, there is indeed a regression with sysbench and a low number
> of threads, and it's being fixed. The peak got improved visibly in
> sched-devel:
>
> http://people.redhat.com/mingo/misc/sysbench-sched-devel.jpg
>
> but there is still some peak regression left, i'm testing a patch for
> that.
>
There's one important bit missing from that graph: the 2.6.23-SCHED_BATCH
values. Without them we can't tell how much of the improvement comes from
sched-devel and how much from SCHED_BATCH. Clearly 2.6.23 is better than
2.6.22.any in this test; the locking issues seem to dominate that
difference to the point that nothing else would be informative.
This weekend I have to build kernels for various machines, so I intend to
run some of the builds under SCHED_BATCH and let the others run under the
default policy. If I find anything interesting I'll report back.
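
For anyone who wants to try the same comparison: here's a minimal sketch
of a wrapper that runs its argument under SCHED_BATCH. The wrapper itself
is just my illustration, not part of any harness discussed in this thread;
from the shell, chrt --batch 0 <command> does the same thing.

/* batchrun.c - run a command under SCHED_BATCH (illustrative sketch) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	struct sched_param sp = { .sched_priority = 0 }; /* must be 0 for SCHED_BATCH */

	if (argc < 2) {
		fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
		return 1;
	}
	/* Children inherit the policy across fork/exec, so the whole
	 * build (make plus the compilers it spawns) runs as SCHED_BATCH. */
	if (sched_setscheduler(0, SCHED_BATCH, &sp) == -1) {
		perror("sched_setscheduler");
		return 1;
	}
	execvp(argv[1], &argv[1]);	/* e.g. ./batchrun make -j4 */
	perror("execvp");
	return 1;
}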
--
Bill Davidsen <davidsen@....com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot