Message-ID: <20070423032215.GC25162@wotan.suse.de>
Date: Mon, 23 Apr 2007 05:22:15 +0200
From: Nick Piggin <npiggin@...e.de>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Peter Williams <pwil3058@...pond.net.au>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
Willy Tarreau <w@....eu>,
Gene Heskett <gene.heskett@...il.com>, Mark Lord <lkml@....ca>,
Ulrich Drepper <drepper@...hat.com>
Subject: Re: [patch] CFS scheduler, -v5
On Mon, Apr 23, 2007 at 04:55:53AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <npiggin@...e.de> wrote:
>
> > > the biggest user-visible change in -v5 are various interactivity
> > > improvements (especially under higher load) to fix reported
> > > regressions, and an improved way of handling nice levels. There's
> > > also a new sys_sched_yield_to() syscall implementation for i686 and
> > > x86_64.
> > >
> > > All known regressions have been fixed. (knock on wood)
> >
> > I think the granularity is still much too low. Why not increase it to
> > something more reasonable as a default?
>
> note that CFS's "granularity" value is not directly comparable to
> "timeslice length":
Right, but it does introduce the kbuild regression, and as we
discussed, this will only get worse on newer CPUs with bigger
caches, or on less naturally context-switchy workloads.
> > [ Note: while CFS's default preemption granularity is currently set to
> > 5 msecs, this value does not directly transform into timeslices: for
> > example two CPU-intense tasks will have effective timeslices of 10
> > msecs with this setting. ]
>
> also, i just checked SD: 0.46 defaults to 8 msecs rr_interval (on 1 CPU
> systems), which is lower than the 10 msecs effective timeslice length
> CFS-v5 achieves on two CPU-bound tasks.
This is still about an order of magnitude more context switching than
the current scheduler does, so I still think it is too small.
> (in -v6 i'll scale the granularity up a bit with the number of CPUs,
> like SD does. That should get the right result on larger SMP boxes too.)
I don't really like the scaling with SMP thing. The cache effects are
still going to be significant on small systems, and there are lots of
non-desktop users of those (eg. clusters).
> while i agree it's a tad too finegrained still, I agree with Con's
> choice: rather err on the side of being too finegrained and lose some
> small amount of throughput on cache-intense workloads like compile jobs,
> than err on the side of being visibly too choppy for users on the
> desktop.
So cfs gets too choppy if you make the effective timeslice comparable
to mainline?
My approach is completely the opposite. For testing, I prefer to make
the timeslice as large as possible, so any problems or regressions are
really noticeable and will be reported; it can be scaled back to
something smaller once those kinks are ironed out.