Message-ID: <CAE9FiQVNZHTxpNCnn+YPtqjWrQjFmtTh06sbBawqFonMFwSZ-w@mail.gmail.com>
Date: Thu, 16 Oct 2014 09:36:35 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Wilmer van der Gaast <wilmer@...st.net>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Pavel Machek <pavel@....cz>,
Rafael Wysocki <rafael.j.wysocki@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Machine crashes right *after* ~successful resume
On Thu, Oct 16, 2014 at 2:36 AM, Wilmer van der Gaast <wilmer@...st.net> wrote:
> Hello,
>
> On 16-10-14 05:32, Yinghai Lu wrote:
>>
>>
>> Can you please try the attached patch? That should work around the problem.
>>
> Sadly, no luck. (I do assume you meant me to use the patch against a clean
> 3.17 tree *without* yesterday's revert patch applied.) Back to a crash
> at/after the third resume:
>
> [ 372.502897] usb 3-1.1: reset high-speed USB device number 3 using
> ehci-pci
> [ 372.678765] usb 2-1.5: reset low-speed USB device number 3 using ehci-pci
> [ 373.398437] Clocksource tsc unstable (delta = -136457848 ns)
> [ 373.897503] Switched to clocksource hpet
> [ 373.897536] PM: resume of devices complete after 2143.535 msecs
> [ 373.898225] r8169 0000:07:00.0 eth0: link up
> [ 374.319311] Restarting tasks ... done.
> (And then nothing.)
>
> Interestingly, I did see the "resume of devices" time grow on each resume
> again this time. I'll put the full dmesg dump in the same place as before:
> http://gaast.net/~wilmer/.lkml/
I checked that dmesg and the console output; the last resume looks OK.

Can you put "debug ignore_loglevel" on the boot command line? That way
we can directly compare the serial console output between a good resume
and a bad one.
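For example, something like this (assuming GRUB2 here; the config file
location and the update command can differ per distro):

  # /etc/default/grub -- append to whatever is already on that line
  GRUB_CMDLINE_LINUX="debug ignore_loglevel"

  $ sudo update-grub    # Debian/Ubuntu; elsewhere: grub2-mkconfig -o /boot/grub2/grub.cfg

If serial console isn't already set up, a console=ttyS0,115200 parameter
would also go on that line (port and speed depend on your hardware).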
Also, did you try removing the r8169 module before each suspend?
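Something along these lines, assuming the driver unloads cleanly (the
suspend command itself depends on your setup):

  sudo rmmod r8169          # unload the NIC driver before suspending
  sudo pm-suspend           # or: echo mem | sudo tee /sys/power/state
  sudo modprobe r8169       # reload it after resume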
Thanks
Yinghai