Message-ID: <Pine.LNX.4.64.0707131728070.1817@scrub.home>
Date: Fri, 13 Jul 2007 19:23:41 +0200 (CEST)
From: Roman Zippel <zippel@...ux-m68k.org>
To: Mike Galbraith <efault@....de>
cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <andrea@...e.de>,
Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...radead.org>,
Chris Wright <chrisw@...s-sol.org>
Subject: Re: x86 status was Re: -mm merge plans for 2.6.23
Hi,
On Fri, 13 Jul 2007, Mike Galbraith wrote:
> > The new scheduler does _a_lot_ of heavy 64 bit calculations without any
> > attempt to scale that down a little...
>
> See prio_to_weight[], prio_to_wmult[] and sysctl_sched_stat_granularity.
> Perhaps more can be done, but "without any attempt..." isn't accurate.
Calculating these values at runtime would have been completely insane, and the
alternative would have been a crummy approximation, so using a lookup table is
actually a good thing. That's not the problem.
BTW, could someone please verify the prio_to_wmult table? Especially [16]
and [21] look a little off, as if a digit was cut off.
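For reference, here is a quick userspace sketch (not kernel code, and assuming
each prio_to_wmult[] entry is meant to be the precalculated inverse
2^32 / prio_to_weight[i]) for recomputing what a suspicious entry should be:

#include <stdio.h>
#include <stdint.h>

/* Expected inverse weight, assuming wmult = 2^32 / weight. */
static uint32_t expected_wmult(uint32_t weight)
{
	return (uint32_t)((1ULL << 32) / weight);
}

int main(void)
{
	/* Example with the nice-0 weight of 1024: 2^32 / 1024 = 4194304.
	 * Substitute the weights for entries [16] and [21] to compare
	 * against the values in the table. */
	printf("weight 1024 -> wmult %u\n", expected_wmult(1024));
	return 0;
}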
While I'm at it, the 10% scaling there looks like a bit much (unless there
are other changes I haven't looked at yet); the old code used more like 5%.
This means a prio -20 task would get 98.86% cpu time compared to a prio 0
task, which was previously about the difference between -20 and 19 (where it
would have gotten only 88.89%), while now a prio -20 task would get 99.98%
cpu time compared to a prio 19 task.
The individual levels are unfortunately not that easily comparable, but at
the overall scale the change looks IMHO a little drastic.
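To make the arithmetic explicit, here is a small userspace sketch (again not
kernel code) that reproduces the percentages above, assuming a weight ratio
of roughly 1.25 between adjacent nice levels, which is what the quoted
figures imply:

#include <stdio.h>
#include <math.h>

/* CPU share of a task competing against one other task, given the
 * weight ratio between the two. */
static double share(double weight_ratio)
{
	return weight_ratio / (weight_ratio + 1.0);
}

int main(void)
{
	double r_m20_vs_0  = pow(1.25, 20);	/* nice -20 vs nice 0  */
	double r_m20_vs_19 = pow(1.25, 39);	/* nice -20 vs nice 19 */

	printf("-20 vs  0: %.2f%%\n", 100.0 * share(r_m20_vs_0));	/* ~98.86 */
	printf("-20 vs 19: %.2f%%\n", 100.0 * share(r_m20_vs_19));	/* ~99.98 */
	return 0;
}

(Compile with -lm; it prints about 98.86% and 99.98%.)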
bye, Roman
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/