Message-Id: <200704192153.48530.kernel@kolivas.org>
Date: Thu, 19 Apr 2007 21:53:48 +1000
From: Con Kolivas <kernel@...ivas.org>
To: Nick Piggin <npiggin@...e.de>
Cc: ck@....kolivas.org, Al Boldi <a1426z@...ab.com>,
Ingo Molnar <mingo@...e.hu>, Matt Mackall <mpm@...enic.com>,
Peter Williams <pwil3058@...pond.net.au>,
Mike Galbraith <efault@....de>,
Bill Huey <billh@...ppy.monkey.org>,
William Lee Irwin III <wli@...omorphy.com>,
Gene Heskett <gene.heskett@...il.com>,
Willy Tarreau <w@....eu>,
linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: Announce - Staircase Deadline cpu scheduler v0.42
On Thursday 19 April 2007 20:22, Nick Piggin wrote:
> On Thu, Apr 19, 2007 at 07:40:04PM +1000, Con Kolivas wrote:
> > On Thursday 19 April 2007 13:22, Nick Piggin wrote:
> > > On Thu, Apr 19, 2007 at 12:12:14PM +1000, Con Kolivas wrote:
> > > > Version 0.42
> > > >
> > > > http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc7-sd-0.42.patch
> > >
> > > OK, I'll run some tests later today...
> >
> > Thank you very much.
>
> lmbench numbers are roughly comparable to mainline (lmbench seems to be
> a bit erratic, but there isn't the obvious drop that cfs has).
Great.
Thanks again for doing these.
> Didn't worry about hackbench ;)
>
> kbuild:
> 2.6.21-rc7
> 508.87user 32.47system 2:17.82elapsed 392%CPU
> 509.05user 32.25system 2:17.84elapsed 392%CPU
> 508.75user 32.26system 2:17.83elapsed 392%CPU
> 508.63user 32.17system 2:17.88elapsed 392%CPU
> 509.01user 32.26system 2:17.90elapsed 392%CPU
> 509.08user 32.20system 2:17.95elapsed 392%CPU
>
> 2.6.21-rc7-sd42
> 512.78user 31.99system 2:18.41elapsed 393%CPU
> 512.55user 31.90system 2:18.57elapsed 392%CPU
> 513.05user 31.78system 2:18.48elapsed 393%CPU
> 512.46user 32.06system 2:18.63elapsed 392%CPU
> 512.78user 31.81system 2:18.49elapsed 393%CPU
> 512.41user 32.08system 2:18.70elapsed 392%CPU
>
> sd42 is doing about 745 context switches per second here, and performance is
> slightly below mainline. But it isn't doing badly.
Not bad. It's always hard to know in advance where the sweet spot will lie
with these things. Basically, the higher the better... for this one workload,
of course.
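
For reference, the %CPU column in the kbuild runs above is just
(user + system) / elapsed, e.g. (508.87 + 32.47) / 137.82 ~ 3.93, matching
the ~392%CPU reported. The context-switch rate quoted above can be sampled
from the system-wide "ctxt" counter in /proc/stat; a minimal sketch in C
(this is a stock Linux interface, not part of either scheduler patch):

#include <stdio.h>
#include <unistd.h>

/* Read the cumulative context-switch count from /proc/stat. */
static unsigned long long read_ctxt(void)
{
        FILE *f = fopen("/proc/stat", "r");
        char line[256];
        unsigned long long v = 0;

        if (!f)
                return 0;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "ctxt %llu", &v) == 1)
                        break;
        fclose(f);
        return v;
}

int main(void)
{
        /* sample twice, one second apart, and print the rate */
        unsigned long long a = read_ctxt();
        sleep(1);
        unsigned long long b = read_ctxt();
        printf("%llu ctxsw/sec\n", b - a);
        return 0;
}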
> 507.87user 32.53system 2:17.50elapsed 392%CPU
> 508.47user 32.40system 2:17.56elapsed 393%CPU
> 508.59user 32.27system 2:17.53elapsed 393%CPU
>
> A few runs with rr_interval at 100 show that ctxsw numbers drop to 587, and
> performance is up to slightly above mainline.
Well, they're nice numbers indeed. I don't even need to cap the maximum at
100ms, but I seriously doubt we'd see much improvement beyond there... I'm
sure some render farm might enjoy values of 1 second, though (like my -ck
patchset already offers in compute mode for the old-fashioned staircase cpu
scheduler).
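
For anyone wanting to reproduce the rr_interval=100 runs, a minimal sketch,
assuming the SD patch exposes the knob as /proc/sys/kernel/rr_interval in
milliseconds (the exact path may vary by patch version; needs root):

#include <stdio.h>

int main(void)
{
        /* assumed sysctl path for SD's timeslice knob; check your tree */
        FILE *f = fopen("/proc/sys/kernel/rr_interval", "w");

        if (!f) {
                perror("rr_interval");
                return 1;
        }
        fprintf(f, "100\n");    /* 100ms, the value tested above */
        fclose(f);
        return 0;
}

A plain "echo 100 > /proc/sys/kernel/rr_interval" from a root shell does the
same thing.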
> With the results I've got so far with all schedulers (actually I didn't try
> nicksched with a small timeslice, but I'm sure it would give the expected
> result)... I'd say 5ms might be too small a timeslice. Even 15ms will hurt
> some people, I think.
On 4x (that's what your hardware was, IIRC) SD would be setting it to 15ms.
Mainline's timeslice granularity would set a 4x box to about 40ms. Pulling a
number out of my rear-end generator, I'd say 20ms for SD should get it close
enough without sacrificing latency too much more.
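
To make that arithmetic concrete, a purely hypothetical sketch of this kind
of CPU-count scaling; the base value and growth rate here are invented for
illustration and are not SD's (or mainline's) actual code:

#include <stdio.h>

/* Hypothetical illustration only, not SD's actual algorithm: grow a
 * base timeslice with the CPU count so boxes with more CPUs context
 * switch less often per CPU. */
static unsigned int scale_rr_interval(unsigned int base_ms, unsigned int ncpus)
{
        unsigned int ms = base_ms;

        /* add ~50% per doubling of CPUs (invented growth rate) */
        while (ncpus > 1) {
                ms += ms / 2;
                ncpus /= 2;
        }
        return ms;
}

int main(void)
{
        /* base_ms = 7 is invented; it happens to give 15ms on 4 CPUs,
         * in the ballpark of the figure quoted above */
        printf("1 cpu: %ums, 4 cpus: %ums\n",
               scale_rr_interval(7, 1), scale_rr_interval(7, 4));
        return 0;
}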
> Although we could arguably tolerate this kind of regression, my box only
> has 1MB caches, and kbuild is naturally context switching at over 500 per
> second anyway. On something with bigger caches and less context switchy /
> more cache sensitive workloads, the regression could be quite a bit worse.
>
> (not directed at anyone in particular, but food for thought)
Heck, if I'm going to keep offering SD as an alternative for comparison, I
may as well use the food for thought you've offered me. I guess I'd better
work on a v0.43 with some more of these very easy tweaks then. Thanks!
--
-ck