Message-ID: <48C81F12.9040208@goop.org>
Date: Wed, 10 Sep 2008 12:25:06 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Rambaldi <rambaldi@...all.nl>
CC: linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: 2.6.27-rc6 xen soft lockup
Rambaldi wrote:
> The machine has two Intel(R) Xeon(R) E5420s, so that gives a total of
> 8 CPUs.
> During the time of the lockup the CPU load, as measured with cacti,
> was about 4%, with an increase to 15% at the time the BUG was triggered.
> So I would say mostly idle, but not very idle.
So that's the cpu load within the domain? How about the overall system
load? What other domains are running?
>
> > Did anything fail or misbehave?
> No, nothing failed or misbehaved (as far as I could tell).
>
> With dynticks I guess you mean CONFIG_NO_HZ; this option is not set.
(In general it's a good idea to set it for virtual machines, to avoid
spuriously scheduling vcpus.)
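
A quick way to check is something like the following (a rough sketch; it
assumes your distro installs the build config under /boot, and
/proc/config.gz only exists if the kernel was built with
CONFIG_IKCONFIG_PROC):

    # see whether the running kernel was built tickless
    grep 'CONFIG_NO_HZ' /boot/config-$(uname -r)
    # or, if the in-kernel config is exposed:
    zgrep 'CONFIG_NO_HZ' /proc/config.gz

If that comes back "# CONFIG_NO_HZ is not set", enabling "Tickless System
(Dynamic Ticks)" under "Processor type and features" and rebuilding should
turn it on.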
> I have attached my .config. I have also attached the output of
> (date ; cat /proc/interrupts ; sleep 10 ; date ; cat /proc/interrupts
> )> /tmp/interrupts
> to give an impression of the number of interrupts after 11:30 hours
> of uptime.
Well, there were 1001 interrupts on cpu 1 in that interval, which shows
that the timer interrupts are going at full rate on the idle cpu.
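
For reference, a minimal way to eyeball that rate is to sample the timer
counters twice and compare (a rough sketch; the exact label for the timer
line differs between bare metal and Xen guests, so adjust the grep pattern
to match whatever your /proc/interrupts actually shows):

    # dump the per-cpu timer interrupt counters, wait 10s, dump them again;
    # the per-column difference is the number of ticks in that interval
    grep -iE 'timer|loc' /proc/interrupts
    sleep 10
    grep -iE 'timer|loc' /proc/interrupts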
I'm a bit confused. I'm not sure what would trigger a lockup at that
point, unless it really stopped taking interrupts for a while.
Unfortunately the RIP and backtrace are not particularly helpful. I'm
assuming the message is spurious, and indicates some other kind of
timekeeping bug.
> Any other info that you need?
Full dmesg output, for completeness.
J