Message-ID: <20071129105744.GI10577@elte.hu>
Date: Thu, 29 Nov 2007 11:57:44 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Miklos Szeredi <miklos@...redi.hu>
Cc: jdike@...toit.com, user-mode-linux-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org
Subject: Re: scheduling anomaly on uml (was: -rt doesn't compile for UML)
* Miklos Szeredi <miklos@...redi.hu> wrote:
> I can't say I'm understanding these traces very well, but here's a
> snippet that looks a bit strange. I'm running 'while true; do date;
> done' in parallel with the dd.
>
> For some time it is doing 100% CPU as expected, then it goes into a
> second or so of mostly idle (afaics), and then returns to the normal
> pattern again.
try:
echo 1 > /proc/sys/kernel/stackframe_tracing
to get symbolic stack backtraces at the wakeup points, and add
trace_special_sym() calls to generate extra stackdump entries at
arbitrary places. schedule() does not have it right now - it might make
sense to add it.
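for illustration, a minimal sketch of what such a call site could look
like - the no-argument trace_special_sym() prototype and the exact spot
in schedule() are assumptions here, not a tested patch; check the tracer
headers in your tree:

	/* kernel/sched.c - hypothetical instrumentation only */
	asmlinkage void __sched schedule(void)
	{
		/* emit an extra symbolic stackdump entry per invocation */
		trace_special_sym();

		/* ... existing schedule() body unchanged ... */
	}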
also, enabling mcount:
echo 1 > /proc/sys/kernel/mcount_enabled
will give you a _lot_ more verbose trace. Likewise:
echo 1 > /proc/sys/kernel/syscall_tracing
(but for that you'd have to add the sys_call()/sys_ret() instrumentation
that x86 has in entry_32.S)
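a very rough sketch of what that could look like on UML - the file, the
handle_syscall() entry point and the sys_call()/sys_ret() prototypes are
all assumptions about the UML/-rt trees, not a tested patch:

	/* arch/um/kernel/skas/syscall.c - hypothetical sketch only;
	 * function/macro names and the hook prototypes are guesses */
	void handle_syscall(union uml_pt_regs *regs)
	{
		int syscall = UPT_SYSCALL_NR(regs);

		sys_call(syscall);	/* assumed: tracer entry hook */

		/* ... existing EXECUTE_SYSCALL() dispatch unchanged ... */

		sys_ret(syscall);	/* assumed: tracer exit hook */
	}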
but even this high-level trace shows something weird:
> events/0-4 0.... 16044512us+: schedule <<idle>-0> (20 -5)
> <idle>-0 0.... 16044564us!: schedule <events/0-4> (-5 20)
> <idle>-0 0.Nh. 16076072us+: __trace_start_sched_wakeup <date-7133> (120 -1)
> <idle>-0 0.Nh. 16076075us+: __trace_start_sched_wakeup <dd-6444> (120 -1)
> <idle>-0 0.Nh. 16076078us+: __trace_start_sched_wakeup <kswapd0-33> (115 -1)
> dd-6444 0.... 16076104us+: schedule <<idle>-0> (20 0)
how come UML idled for ~30 msecs here (from going idle at 16044564us to
the wakeups at 16076072us), while the workload was supposed to be
CPU-bound? It's not IO bound anywhere, right? No SMP artifacts
either, right?
Ingo