Message-ID: <20070717075346.GB13539@elte.hu>
Date: Tue, 17 Jul 2007 09:53:46 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Roman Zippel <zippel@...ux-m68k.org>
Cc: Matt Mackall <mpm@...enic.com>, James Bruce <bruce@...rew.cmu.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Arjan van de Ven <arjan@...radead.org>,
Chris Wright <chrisw@...s-sol.org>
Subject: Re: [PATCH] CFS: Fix missing digit off in wmult table

* Roman Zippel <zippel@...ux-m68k.org> wrote:

> > > It's nice that these artifacts are gone, but that still doesn't
> > > explain why this ratio had to be increased that much, from around
> > > 1:10 to 1:69.
> >
> > More dynamic range is better? If you actually want a task to get 20x
> > the CPU time of another, the older scheduler doesn't really allow
> > it.
>
> You can already have that; the complete range from nice 19 to -20 was
> about 1:80.

But that is irrelevant: all tasks start out at nice 0, and what matters
is the dynamic range around 0.
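
(To illustrate that range around 0: here is a minimal user-space sketch
of the old O(1) scheduler's task_timeslice() scaling, assuming the
pre-CFS constants from kernel/sched.c, expressed in plain milliseconds
and ignoring the HZ-dependent jiffies rounding:)

#include <stdio.h>

/*
 * Sketch of the old O(1) scheduler's task_timeslice() (pre-CFS
 * kernel/sched.c), in milliseconds: static priority is nice + 120,
 * MAX_PRIO is 140, MAX_USER_PRIO is 40, and negative nice levels
 * got a 4x larger base slice.
 */
#define MAX_PRIO	140
#define MAX_USER_PRIO	40
#define MIN_TIMESLICE	5	/* ms */
#define DEF_TIMESLICE	100	/* ms */

static int scale_prio(int x, int prio)
{
	int slice = x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2);

	return slice > MIN_TIMESLICE ? slice : MIN_TIMESLICE;
}

static int task_timeslice(int nice)
{
	int static_prio = nice + 120;

	return nice < 0 ? scale_prio(DEF_TIMESLICE * 4, static_prio) :
			  scale_prio(DEF_TIMESLICE, static_prio);
}

int main(void)
{
	/* prints 800 100 5: about 1:8 on the negative side of nice 0
	 * and 1:20 on the positive side */
	printf("%d %d %d\n", task_timeslice(-20), task_timeslice(0),
	       task_timeslice(19));
	return 0;
}
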
So the dynamic range has been made uniform: in the positive direction
it went from the old 1:10...1:20...1:30 to 1:69 for nice +19, and in
the negative direction from 1:8 to 1:86 for nice -20. If you look at
the negative nice levels alone it's a substantial increase, but if you
compare it with the positive nice levels you'll see that similar kinds
of dynamic ranges were already present in the old scheduler, and you'll
see why we've done it.
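
(And for the new numbers, a quick sketch, assuming the prio_to_weight[]
values CFS carries at this point, that reproduces these ratios:)

#include <stdio.h>

/*
 * nice -20 ... +19 mapped to load weight, as in CFS's prio_to_weight[]
 * table: each nice level changes the weight by about 25%.
 */
static const int prio_to_weight[40] = {
	/* -20 */ 88761, 71755, 56483, 46273, 36291,
	/* -15 */ 29154, 23254, 18705, 14949, 11916,
	/* -10 */  9548,  7620,  6100,  4904,  3906,
	/*  -5 */  3121,  2501,  1991,  1586,  1277,
	/*   0 */  1024,   820,   655,   526,   423,
	/*   5 */   335,   272,   215,   172,   137,
	/*  10 */   110,    87,    70,    56,    45,
	/*  15 */    36,    29,    23,    18,    15,
};

int main(void)
{
	double w0 = prio_to_weight[20];	/* nice 0 */

	/* prints ~1:68.3 and ~1:86.7, the 1:69 and 1:86 quoted above */
	printf("0 vs +19: 1:%.1f\n", w0 / prio_to_weight[39]);
	printf("-20 vs 0: 1:%.1f\n", prio_to_weight[0] / w0);
	return 0;
}
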
Negative nice levels are admin-controlled, so the increase in the
negative levels is not a big issue, and people actually like the
increased dynamic range and the consistency. The positive range _might_
be a bigger issue, but there we were largely inconsistent anyway, and
again, people like the increased dynamic range.

Ingo