Message-ID: <20090215110104.GB31351@elte.hu>
Date: Sun, 15 Feb 2009 12:01:04 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Damien Wyart <damien.wyart@...e.fr>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>,
Frédéric Weisbecker <fweisbec@...il.com>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kernel Testers List <kernel-testers@...r.kernel.org>
Subject: Re: [Bug #12650] Strange load average and ksoftirqd behavior with
2.6.29-rc2-git1

* Damien Wyart <damien.wyart@...e.fr> wrote:
> So I followed the tracing steps in the tutorial (with the 1 sec sleep),
> which gave me this:
> http://damien.wyart.free.fr/trace_2.6.29-rc5_ksoftirqd_prob.txt.gz

thanks. There's definitely some weirdness visible in the trace,
for example:

 0)    gpm-1879    =>  ksoftir-4
 ------------------------------------------

 0)   0.964 us    |    finish_task_switch();
 0) ! 1768184 us  |  }
 0)               |  do_softirq() {
 0)               |    __do_softirq() {
 0)               |      rcu_process_callbacks() {

the 1.7 seconds 'overhead' there must be a fluke - you'd notice it if
ksoftirqd _really_ took that much time to execute.
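
btw., to check whether that bogus delta is a one-off or shows up all
over the trace, something like this should do - a rough sketch, using
the filename from your URL (the '!' marker flags durations above
100 usecs in the function-graph output):

  # list the largest reported durations in the captured trace
  zcat trace_2.6.29-rc5_ksoftirqd_prob.txt.gz | grep ' ! ' | sort -n -k3 | tail -20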

One possibility for these symptoms would be broken scheduler timestamps.
Could you enable absolute timestamp printing via:

  echo funcgraph-abstime > trace_options
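
For reference, the rough sequence I have in mind - assuming debugfs is
mounted at /sys/kernel/debug and you are using the function-graph
tracer as in the tutorial (adjust paths to your setup):

  cd /sys/kernel/debug/tracing

  # select the function graph tracer
  echo function_graph > current_tracer

  # print absolute timestamps in the leftmost column
  echo funcgraph-abstime > trace_options

  # reproduce the ksoftirqd load for a while, then capture the output
  cat trace | gzip > ~/trace_abstime.txt.gz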

Also, my guess is that if you boot via idle=poll, the symptoms go away.
This would strengthen the suspicion that it's scheduler-clock troubles.
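
A minimal way to run that experiment - append idle=poll to the kernel
command line in your bootloader config (grub, lilo, whatever you use),
reboot, and verify it took effect. It's also worth a quick look at the
kernel log for TSC trouble, which would point in the same direction:

  # verify the boot parameter is active
  cat /proc/cmdline

  # any "Marking TSC unstable" style messages are interesting here
  dmesg | grep -i tsc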

	Ingo