Message-ID: <20070527184402.GA21161@srcf.ucam.org>
Date: Sun, 27 May 2007 19:44:02 +0100
From: Matthew Garrett <mjg59@...f.ucam.org>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Pavel Machek <pavel@....cz>,
Romano Giannetti <romanol@...omillas.es>,
Chris Wright <chrisw@...s-sol.org>,
Chuck Ebbert <cebbert@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
stable@...nel.org, Justin Forbes <jmforbes@...uxtx.org>,
Zwane Mwaikambo <zwane@....linux.org.uk>,
Theodore Ts'o <tytso@....edu>,
Randy Dunlap <rdunlap@...otime.net>,
Dave Jones <davej@...hat.com>,
Chuck Wolber <chuckw@...ntumlinux.com>,
Chris Wedgwood <reviews@...cw.f00f.org>,
Michael Krufky <mkrufky@...uxtv.org>,
akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk
Subject: Re: pcmcia resume 60 second hang. Re: [patch 00/69] -stable review

On Sun, May 27, 2007 at 08:32:14PM +0200, Rafael J. Wysocki wrote:
> In particular, please see this message:
>
> https://lists.linux-foundation.org/pipermail/linux-pm/2007-May/012301.html

Yes, there's also the notifier chain for the hardware. However, very few
drivers seem to use it - adb seems to be the only one still in the tree.
For everything else, the device tree is used in exactly the same way as
on x86. If it's safe on Macs but not on x86, then as far as I can tell
that's only by luck.
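
For context, a driver that relies only on the driver model gets its
suspend and resume callbacks the same way on both architectures. A
minimal sketch (hypothetical "foo" platform driver using the
platform_driver suspend/resume callbacks of this kernel era, not any
driver in the tree):

#include <linux/platform_device.h>

/*
 * Minimal sketch, not from this thread: a hypothetical driver that only
 * uses the driver-model suspend/resume callbacks - the path that looks
 * the same on Macs and on x86.
 */
static int foo_suspend(struct platform_device *pdev, pm_message_t state)
{
	/* quiesce the hardware; nothing here depends on processes being frozen */
	return 0;
}

static int foo_resume(struct platform_device *pdev)
{
	/* bring the hardware back up */
	return 0;
}

static struct platform_driver foo_driver = {
	.suspend	= foo_suspend,
	.resume		= foo_resume,
	.driver		= {
		.name	= "foo",
	},
};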

Anyway. I've tested the following patch on a dual-core x86. No obvious
issues yet, but I'll try to put it through a few hundred cycles.

diff --git a/include/linux/pm.h b/include/linux/pm.h
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 8812985..1db3012 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -19,7 +19,6 @@
 #include <linux/console.h>
 #include <linux/cpu.h>
 #include <linux/resume-trace.h>
-#include <linux/freezer.h>
 #include <linux/vmstat.h>
 
 #include "power.h"
@@ -66,9 +65,10 @@ static inline void pm_finish(suspend_state_t state)
  *	suspend_prepare - Do prep work before entering low-power state.
  *	@state:		State we're entering.
  *
- *	This is common code that is called for each state that we're
- *	entering. Allocate a console, stop all processes, then make sure
- *	the platform can enter the requested state.
+ *	This is common code that is called for each state that we're
+ *	entering. Allocate a console, then make sure the platform can
+ *	enter the requested state. This is not called for
+ *	suspend-to-disk.
  */
 
 static int suspend_prepare(suspend_state_t state)
@@ -81,11 +81,6 @@ static int suspend_prepare(suspend_state_t state)
 
 	pm_prepare_console();
 
-	if (freeze_processes()) {
-		error = -EAGAIN;
-		goto Thaw;
-	}
-
 	if ((free_pages = global_page_state(NR_FREE_PAGES))
 			< FREE_PAGE_NUMBER) {
 		pr_debug("PM: free some memory\n");
@@ -93,7 +88,7 @@ static int suspend_prepare(suspend_state_t state)
 		if (nr_free_pages() < FREE_PAGE_NUMBER) {
 			error = -ENOMEM;
 			printk(KERN_ERR "PM: No enough memory\n");
-			goto Thaw;
+			goto Exit;
 		}
 	}
 
@@ -118,8 +113,7 @@ static int suspend_prepare(suspend_state_t state)
 	device_resume();
  Resume_console:
 	resume_console();
- Thaw:
-	thaw_processes();
+ Exit:
 	pm_restore_console();
 	return error;
 }
@@ -160,8 +154,8 @@ int suspend_enter(suspend_state_t state)
  *	suspend_finish - Do final work before exiting suspend sequence.
  *	@state:		State we're coming out of.
  *
- *	Call platform code to clean up, restart processes, and free the
- *	console that we've allocated. This is not called for suspend-to-disk.
+ *	Call platform code to clean up and free the console that we've
+ *	allocated. This is not called for suspend-to-disk.
  */
 
 static void suspend_finish(suspend_state_t state)
@@ -170,7 +164,6 @@ static void suspend_finish(suspend_state_t state)
 	pm_finish(state);
 	device_resume();
 	resume_console();
-	thaw_processes();
 	pm_restore_console();
 }
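
For reference, this is the freeze/thaw pairing being dropped from
suspend_prepare()/suspend_finish(), shown as a minimal standalone
sketch (hypothetical caller, not code from the patch):

#include <linux/errno.h>
#include <linux/freezer.h>

/*
 * Minimal sketch (hypothetical caller): how freeze_processes() and
 * thaw_processes() are normally paired around a suspend attempt.
 */
static int example_suspend_with_freezer(void)
{
	int error = 0;

	if (freeze_processes()) {	/* stop userspace tasks first */
		error = -EAGAIN;
		goto Thaw;
	}

	/* ... suspend devices and enter the low-power state here ... */

 Thaw:
	thaw_processes();		/* restart whatever was frozen */
	return error;
}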
--
Matthew Garrett | mjg59@...f.ucam.org