Message-Id: <200901270948.36109.jbarnes@virtuousgeek.org>
Date: Tue, 27 Jan 2009 09:48:35 -0800
From: Jesse Barnes <jbarnes@...tuousgeek.org>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Linux PCI <linux-pci@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] PCI PM: Fix suspend error paths and testing facility breakage
On Monday, January 26, 2009 12:43 pm Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rjw@...k.pl>
>
> If one of the device drivers refuses to suspend by returning an error
> from its ->suspend() callback, the devices that have already been
> suspended are resumed by executing their drivers' ->resume()
> callbacks. Some of these callbacks expect the device's
> configuration space to be restored if the device has been put into
> D3 before they are called. Unfortunately, this mechanism has been
> broken by recent changes moving the restoration of config spaces
> of some devices (most importantly, USB controllers and HDA Intel)
> into the resume callbacks executed with interrupts off. Obviously,
> these callbacks are not invoked in the suspend error path and, as a
> result, the system cannot be successfully brought back into the
> working state in case of a suspend error. The same thing happens
> in the hibernation error path right before putting the system into
> S4.
>
> Similarly, the suspend testing facility associated with the
> /sys/power/pm_test file is broken, because it uses the very same
> mechanism that is used in the suspend and hibernation error paths.
>
> Fix the breakage by making the PCI core restore the configuration
> spaces of PCI devices that haven't been restored already before
> pci_pm_resume() is called for those devices by the PM core.
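
For anyone skimming the thread, the idea reduces to something like the
sketch below.  This is not the applied patch (the helper name is made up,
and the use of the state_saved flag is my assumption about how "haven't
been restored already" is tracked); it just illustrates the "wake the
device and restore config space only if nobody did it yet" step that has
to run before the driver's ->resume() callback in the error path:

#include <linux/pci.h>

/*
 * Illustrative sketch only -- not the actual patch.  The helper name is
 * invented, and treating pci_dev->state_saved as the "config space still
 * needs restoring" marker is an assumption; the real change lives in the
 * PCI PM callbacks in drivers/pci/pci-driver.c.
 */
static void pci_pm_sketch_restore(struct pci_dev *pci_dev)
{
	/* The device must be in D0 before its full config space is written. */
	pci_set_power_state(pci_dev, PCI_D0);

	if (pci_dev->state_saved) {
		pci_restore_state(pci_dev);	/* write back the saved config space */
		pci_dev->state_saved = false;	/* don't restore it a second time */
	}
}

With something along those lines run for each not-yet-restored device
before pci_pm_resume(), the drivers' ->resume() callbacks see the config
space they expect even when a suspend is aborted half way through.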
Applied to my for-linus branch, thanks.
--
Jesse Barnes, Intel Open Source Technology Center