Message-Id: <200812050120.02270.rjw@sisk.pl>
Date: Fri, 5 Dec 2008 01:20:01 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Frans Pop <elendil@...net.nl>, Greg KH <greg@...ah.com>,
Ingo Molnar <mingo@...e.hu>, jbarnes@...tuousgeek.org,
lenb@...nel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
tiwai@...e.de, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Regression from 2.6.26: Hibernation (possibly suspend) broken on Toshiba R500 (bisected)
On Friday, 5 of December 2008, Linus Torvalds wrote:
>
> On Fri, 5 Dec 2008, Rafael J. Wysocki wrote:
> > >
> > > Yes. And in the case of Frans' machine, the e1000e controller was before
> > > all the bridges too.
> >
> > Hm. And unloading it before suspend made things work? Interesting.
>
> Yeah. Frans' workaround was
>
> - unloading e1000e before suspend
> - using aggressive powersave setting on snd_hda_intel to ensure that
> sound controller was already sleeping before entering suspend
>
> and both of those devices are on the root PCI bus and are enumerated (and
> thus resumed) before the transparent bridge.
>
> So yeah, the whole "resource allocation for that bridge" saga should
> _really_ not matter. But it clearly does seem to.
Well, I'm going to take a closer look at what we're doing to PCI bridges in the
resume code path, as that _feels_ relevant here.
Perhaps we're omitting something we're supposed to do (that has already happened
for regular devices in the past), or we're doing something we're not supposed
to. Unfortunately, I'd have to dig into the PCI-to-PCI bridge spec for that,
which will take time. Still, I suspect it's worth doing, since the problem
could potentially affect a wide range of systems.
The fact that I have a box on which I can reproduce the problem should help
here. ;-)
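For reference, Frans' workaround quoted above can be sketched roughly as the
following script. This is only an illustration, not part of the fix under
discussion: it assumes root privileges, the standard power_save and
power_save_controller module parameters of snd_hda_intel, and the helper
names (pre_suspend_workaround, run) plus the DRY_RUN guard are my additions
so the sequence can be inspected without touching the hardware.

```shell
#!/bin/sh
# Rough sketch of Frans' pre-suspend workaround. Run as root before
# suspending; set DRY_RUN=1 to just print the commands instead.

pre_suspend_workaround() {
    # Hypothetical helper: echo commands in dry-run mode, execute otherwise.
    run() {
        if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi
    }

    # 1. Unload the e1000e driver so the NIC is quiesced before suspend.
    run modprobe -r e1000e

    # 2. Enable aggressive runtime powersaving on snd_hda_intel so the
    #    sound controller is already asleep when the suspend path runs.
    run sh -c 'echo 1 > /sys/module/snd_hda_intel/parameters/power_save'
    run sh -c 'echo 1 > /sys/module/snd_hda_intel/parameters/power_save_controller'
}

# Demonstration only: print the command sequence without executing it.
DRY_RUN=1 pre_suspend_workaround
```

After that, suspending (e.g. via "echo mem > /sys/power/state") proceeds with
both devices already out of the way.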
Thanks,
Rafael