Message-ID: <BANLkTi=22QFrJ4vO7-3VuHU=9Cg39bxJ4Q@mail.gmail.com>
Date: Mon, 27 Jun 2011 19:25:31 -0700
From: john stultz <johnstul@...ibm.com>
To: Faidon Liambotis <paravoid@...ian.org>
Cc: linux-kernel@...r.kernel.org, stable@...nel.org,
Nikola Ciprich <nikola.ciprich@...uxbox.cz>,
seto.hidetoshi@...fujitsu.com,
Hervé Commowick <hcommowick@...sec.fr>,
Willy Tarreau <w@....eu>, Randy Dunlap <rdunlap@...otime.net>,
Greg KH <greg@...ah.com>, Ben Hutchings <ben@...adent.org.uk>,
Apollon Oikonomopoulos <apoikos@...il.com>
Subject: Re: 2.6.32.21 - uptime related crashes?
On Sat, Apr 30, 2011 at 10:39 AM, Faidon Liambotis <paravoid@...ian.org> wrote:
> We too experienced problems with just the G6 blades at near 215 days uptime
> (on the 19th of April), all at the same time. From our investigation, it
> seems that their cpu_clocks jumped suddenly far in the future and then
> almost immediately rolled over due to wrapping around 64-bits.
>
> Although all of their (G6s) clocks wrapped around *at the same time*, only
> one of them actually crashed at the time, with a second one crashing just
> a few days later, on the 28th.
>
> Three of them had the following on their logs:
> Apr 18 20:56:07 hn-05 kernel: [17966378.581971] tap0: no IPv6 routers present
> Apr 19 10:15:42 hn-05 kernel: [18446743935.365550] BUG: soft lockup - CPU#4 stuck for 17163091968s! [kvm:25913]
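
For reference, a quick back-of-the-envelope check of the numbers in that
log line (standalone userspace C, not from the report): 2^64 nanoseconds
is roughly 18446744073 seconds, so a printk timestamp of 18446743935.xx
means the per-cpu clock was sitting within a couple hundred seconds of the
64-bit wrap, even though the real uptime was only around 215 days:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	double max_s = (double)UINT64_MAX / 1e9;	/* 2^64 ns expressed in seconds */
	double log_s = 18446743935.365550;		/* timestamp from the log above */
	double uptime_s = 215.0 * 24 * 3600;		/* ~215 days of real uptime */

	printf("2^64 ns            = %.3f s\n", max_s);
	printf("log timestamp      = %.3f s (%.1f s below the wrap)\n",
	       log_s, max_s - log_s);
	printf("215 days of uptime = %.0f s\n", uptime_s);
	return 0;
}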
So, did this issue ever get any traction or get resolved?
From the softlockup message, I suspect we hit a multiply overflow in
the underlying sched_clock() implementation.

Because the goal of sched_clock() is to be very fast, lightweight, and
safe from locking issues (so it can be called anywhere), handling
transient corner cases internally has been avoided, as that would
require costly locking and extra overhead. Because of this,
sched_clock() users should take care to be robust in the face of
transient errors.
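
To illustrate the kind of multiply overflow I mean, here is a userspace
sketch modeled on the cycles-to-ns scaling x86 uses, with an assumed
10-bit shift and a ~2.93 GHz TSC (the exact kernel constants may differ):

#include <stdio.h>
#include <stdint.h>

#define SHIFT 10	/* assumed scale factor, like CYC2NS_SCALE_FACTOR */

static uint64_t cyc_to_ns(uint64_t cyc, uint32_t mult)
{
	return (cyc * mult) >> SHIFT;	/* 64-bit product, silently wraps */
}

int main(void)
{
	uint32_t cpu_khz = 2933333;	/* assume a ~2.93 GHz TSC */
	uint32_t mult = ((uint64_t)1000000 << SHIFT) / cpu_khz;	/* ~349 */

	/* cycles accumulated after roughly 215 days of uptime */
	uint64_t cyc = (uint64_t)cpu_khz * 1000 * 215 * 24 * 3600;

	printf("mult = %u\n", mult);
	printf("cyc * mult overflows 64 bits: %s\n",
	       cyc > UINT64_MAX / mult ? "yes" : "no");
	printf("bogus ns value after the wrap = %llu\n",
	       (unsigned long long)cyc_to_ns(cyc, mult));
	return 0;
}

With those assumed numbers the 64-bit product cyc * mult overflows a bit
past 200 days of uptime, which lines up roughly with the ~215 day reports
above.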
Peter: I wonder if the soft lockup code should be using the
(hopefully) more robust timekeeping code (i.e. get_seconds()) for its
get_timestamp function? I'd worry that you might have issues catching
cases where the system was locked up so the timekeeping accounting
code didn't get to run, but you have the same problem with the
jiffies-based sched_clock() code as well (since timekeeping increments
jiffies in most cases).
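
Roughly what I have in mind (a completely untested, kernel-context
sketch; today the watchdog derives its timestamp from cpu_clock()):

#include <linux/time.h>

/* untested sketch: use the timekeeping code's coarse seconds counter
 * instead of deriving seconds from cpu_clock()/sched_clock() */
static unsigned long get_timestamp(void)
{
	/* get_seconds() reads xtime, so it can step with settimeofday,
	 * but it can't hit the sched_clock multiply overflow above */
	return get_seconds();
}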
That said, I didn't see from any of the backtraces in this thread why
the system actually crashed. The softlockup message on its own
shouldn't do that, so I suspect there's still a related issue
somewhere else here.
thanks
-john