Message-ID: <20070801112229.GA11710@elte.hu>
Date: Wed, 1 Aug 2007 13:22:29 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Roman Zippel <zippel@...ux-m68k.org>
Cc: Mike Galbraith <efault@....de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: CFS review

* Roman Zippel <zippel@...ux-m68k.org> wrote:

> [...] e.g. in this example there are three tasks that run only for
> about 1ms every 3ms, but they get far more time than they should
> fairly have gotten:
>
> 4544 roman 20 0 1796 520 432 S 32.1 0.4 0:21.08 lt
> 4545 roman 20 0 1796 344 256 R 32.1 0.3 0:21.07 lt
> 4546 roman 20 0 1796 344 256 R 31.7 0.3 0:21.07 lt
> 4547 roman 20 0 1532 272 216 R 3.3 0.2 0:01.94 l

Mike and I have managed to reproduce similarly looking 'top' output,
but it takes some effort: we had to deliberately run with a non-TSC
sched_clock(), CONFIG_HZ=100, !CONFIG_NO_HZ and !CONFIG_HIGH_RES_TIMERS.
in that case 'top' accounting symptoms similar to the above are not due
to the scheduler starvation you suspected, but due to the effect of a
low-resolution scheduler clock and a timer/scheduler tick tightly
coupled to it: with a 10ms tick and a sched_clock() that only advances
at tick resolution, ~1ms execution slices cannot be measured, so CPU
time gets attributed at tick granularity to whichever task happens to
be running when the tick fires. I tried the very same workload on
2.6.22 (with the same .config) and i saw similarly anomalous 'top'
output. (Not only can one create really anomalous CPU usage, one can
completely hide tasks from 'top' output.)
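
as an illustration of the 'hiding' effect (a rough sketch, assuming
HZ=100, no hrtimers and a reasonably accurate CLOCK_MONOTONIC - not a
claim about what lt.c does): a task that sleeps across every timer
tick and only burns the CPU in between ticks is almost never the
running task when the tick fires, so tick-sampled accounting charges
it almost nothing:

/*
 * sketch: evade tick-based CPU accounting on a HZ=100,
 * !CONFIG_HIGH_RES_TIMERS kernel by sleeping across every timer tick
 * and burning the CPU only in between ticks.
 */
#include <time.h>
#include <stdint.h>

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
    const struct timespec tiny = { 0, 1 };

    for (;;) {
        uint64_t start;

        /*
         * without hrtimers a short nanosleep() is rounded up to the
         * next timer tick, so we wake up right after a tick:
         */
        nanosleep(&tiny, NULL);

        /* burn the CPU for most of the 10ms tick period ... */
        start = now_ns();
        while (now_ns() - start < 8000000ULL)
            ;
        /* ... and be asleep again when the next tick fires. */
    }
    return 0;
}
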
if your test-box has a high-resolution sched_clock() [which is easily
possible] then please send us the lt.c and l.c code so that we can
have a look.
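
for reference, a test along the lines you describe (run ~1ms, sleep
~2ms) would look roughly like the sketch below - this is only our
guess, the real lt.c may well differ:

/*
 * guessed approximation of the described workload: burn the CPU for
 * about 1ms, then sleep for about 2ms (a ~33% duty cycle over 3ms).
 */
#include <time.h>
#include <stdint.h>

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
    const struct timespec two_ms = { 0, 2000000 };

    for (;;) {
        uint64_t start = now_ns();

        /* busy-loop for ~1ms: */
        while (now_ns() - start < 1000000ULL)
            ;

        /* then sleep for ~2ms: */
        nanosleep(&two_ms, NULL);
    }
    return 0;
}
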
Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/