Message-ID: <20070801143624.GA18043@elte.hu>
Date: Wed, 1 Aug 2007 16:36:24 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Roman Zippel <zippel@...ux-m68k.org>
Cc: Andi Kleen <andi@...stfloor.org>, Mike Galbraith <efault@....de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: CFS review
* Roman Zippel <zippel@...ux-m68k.org> wrote:
> > jiffies based sched_clock should be soon very rare. It's probably
> > not worth optimizing for it.
>
> I'm not so sure about that. sched_clock() has to be fast, so many
> archs may want to continue to use jiffies. [...]
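
(for reference, a jiffies-based sched_clock() really is dirt cheap -
roughly, the generic fallback of that era is a single multiply by a
compile-time constant. a sketch of the shape of it, not the exact
kernel code:)

/*
 * generic fallback sched_clock(): jiffies scaled up to nanoseconds.
 * one multiply, no division - but only ~1/HZ (i.e. 1-10 msec)
 * resolution, which is what makes the stats corner cases interesting.
 */
unsigned long long __attribute__((weak)) sched_clock(void)
{
        return (unsigned long long)jiffies * (NSEC_PER_SEC / HZ);
}

(arches with a cheap cycle counter override this weak symbol with a
TSC-style implementation.)
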
i think Andi was talking about the vast majority of the systems out
there. For example, check out the arch demographics of current Fedora
installs (according to Smolt's opt-in, UUID-based user metrics):
http://smolt.fedoraproject.org/
i686:     74743
x86_64:   18599
i386:      1208
ppc:        527
ppc64:      396
sparc64:     14
---------------
Total:    95488
even pure i386 (kernels, not systems) is only 1.2% of all installs. By
the time the CFS kernel gets into a distro (a few months at minimum,
typically a year) this percentage will go down further. And embedded
doesn't really care about task-statistics corner cases [ it likely
doesn't have 'top' installed - likely doesn't even have /proc mounted,
or even built in ;-) ].
of course CFS should not do _worse_ stats than what we had before, and
should not break or massively misbehave. Also, anything sane we can do
for low-resolution arches we should do (and we already do quite a bit -
the whole wmult stuff is there to avoid expensive divisions) - and i
regularly booted CFS with a low-resolution clock to make sure it works.
So i'm not trying to duck anything, we've just got to keep our design
priorities right :-)
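
(to make the wmult point concrete: the idea is to precompute
inv_weight = 2^32 / weight once, when a weight is set up, so that the
hot path turns every "delta / weight" into a multiply plus a shift. a
hypothetical user-space sketch of the trick - the names and the example
weight are mine, not the kernel's:)

#include <stdint.h>
#include <stdio.h>

#define WMULT_SHIFT     32

/* done once per weight change, so the division cost is amortized */
static uint32_t calc_inv_weight(uint32_t weight)
{
        return (uint32_t)((1ULL << WMULT_SHIFT) / weight);
}

/*
 * delta / weight approximated as (delta * inv_weight) >> 32.
 * assumes delta < 2^32 so the 64-bit product cannot overflow -
 * true for the nanosecond-sized per-tick deltas this is used on.
 */
static uint64_t scale_delta(uint64_t delta, uint32_t inv_weight)
{
        return (delta * inv_weight) >> WMULT_SHIFT;
}

int main(void)
{
        uint32_t inv = calc_inv_weight(1024);   /* NICE_0-style weight */

        /* 4096000 / 1024 == 4000, with no division in the hot path: */
        printf("%llu\n", (unsigned long long)scale_delta(4096000, inv));
        return 0;
}

(the one real division is paid when the weight changes, which happens
orders of magnitude less often than the per-tick scaling.)
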
Ingo