Message-ID: <alpine.LFD.2.00.0812032034150.3256@nehalem.linux-foundation.org>
Date: Wed, 3 Dec 2008 20:40:58 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Rafael J. Wysocki" <rjw@...k.pl>
cc: Frans Pop <elendil@...net.nl>, Greg KH <greg@...ah.com>,
Ingo Molnar <mingo@...e.hu>, jbarnes@...tuousgeek.org,
lenb@...nel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
tiwai@...e.de, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Regression from 2.6.26: Hibernation (possibly suspend) broken
on Toshiba R500 (bisected)

On Thu, 4 Dec 2008, Rafael J. Wysocki wrote:
>
> Well, in principle it may be related to the way we handle bridges during
> resume

Ahh. Yes, that's possible. It may well be that the problem isn't
resource allocation per se, but just the bigger complexity at resume time.

This is a hibernate-only issue for you, right? Or is it about regular
suspend-to-ram too?

> but I really need to read some docs and compare them with the code
> before I can say anything more about that. Surely, nothing like this
> issue has ever been reported before.

Well, how stable has hibernate been on that particular machine
historically?

Because the half-revert alignment patch (ie reverting part of 5f17cf) that
made it work for you would actually have been a non-issue in the original,
pre-PCI-resource-alignment-cleanup code (ie before commit 88452565).

So the patch you partially reverted was literally the one that made the
Cardbus allocation work the _same_ way as it did historically, before
88452565. So if the new code breaks for you, then so should the "old" code
(ie 2.6.25 and earlier).
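
To make the alignment point concrete: a bridge window has to start at an
address that is a multiple of its required alignment, and for a cardbus
bridge that alignment is effectively the fixed window size rather than
anything derived from the devices behind it. Below is a minimal,
standalone sketch of just that arithmetic. It is not the kernel
allocator (the real sizing code lives in drivers/pci/setup-bus.c), and
the window sizes and addresses are only illustrative.

/*
 * Standalone illustration of aligning a cardbus bridge window.
 * The sizes and addresses below are made up for the example.
 */
#include <stdio.h>

#define CB_IO_WINDOW   0x100UL      /* 256 bytes of I/O space (illustrative) */
#define CB_MEM_WINDOW  0x4000000UL  /* 64 MB of memory space (illustrative)  */

/* Round 'start' up to the next multiple of the power-of-two 'align'. */
static unsigned long align_up(unsigned long start, unsigned long align)
{
        return (start + align - 1) & ~(align - 1);
}

int main(void)
{
        /* Arbitrary example start addresses for free space in each range. */
        unsigned long free_mem = 0xd0001234UL;
        unsigned long free_io  = 0x1010UL;

        printf("mem window at 0x%lx\n", align_up(free_mem, CB_MEM_WINDOW));
        printf("io  window at 0x%lx\n", align_up(free_io, CB_IO_WINDOW));
        return 0;
}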

So the "hasn't been reported before" case may well be just another way of
saying "hibernate has never been very reliable".

Linus