Message-Id: <201107101108.46688.rjw@sisk.pl>
Date: Sun, 10 Jul 2011 11:08:46 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Linux PM mailing list <linux-pm@...ts.linux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Kevin Hilman <khilman@...com>,
Alan Stern <stern@...land.harvard.edu>,
MyungJoo Ham <myungjoo.ham@...sung.com>,
Chanwoo Choi <cw00.choi@...sung.com>,
Paul Walmsley <paul@...an.com>, Greg KH <gregkh@...e.de>,
Magnus Damm <magnus.damm@...il.com>
Subject: [PATCH 5/6 v2] PM / Domains: Do not restore all devices on power off error

From: Rafael J. Wysocki <rjw@...k.pl>

Since every device in a PM domain has its own need_restore flag, which
is set by __pm_genpd_save_device(), there is no need to walk the
domain's device list and restore all of its devices when one of the
drivers' .runtime_suspend() callbacks returns an error.

Signed-off-by: Rafael J. Wysocki <rjw@...k.pl>
---
drivers/base/power/domain.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
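
For illustration, below is a minimal user-space model of the per-device
need_restore logic this patch relies on.  It is a simplified sketch,
not the in-tree code: struct dev_model, save_device(), restore_device()
and domain_active are made-up stand-ins for the kernel's per-device
data, __pm_genpd_save_device(), __pm_genpd_restore_device() and
genpd_set_active(), respectively.

	#include <stdbool.h>
	#include <stdio.h>

	struct dev_model {
		const char *name;
		bool need_restore;	/* set once the device's state is saved */
		bool fail_save;		/* simulate a .runtime_suspend() error */
	};

	/* Save one device's state; on success mark it as needing a restore. */
	static int save_device(struct dev_model *dev)
	{
		if (dev->fail_save)
			return -1;	/* the driver's callback failed */
		dev->need_restore = true;
		return 0;
	}

	/* Restore one device's state, but only if it was actually saved. */
	static void restore_device(struct dev_model *dev)
	{
		if (!dev->need_restore)
			return;
		dev->need_restore = false;
		printf("restored %s\n", dev->name);
	}

	int main(void)
	{
		struct dev_model devs[] = {
			{ .name = "dev0" },
			{ .name = "dev1", .fail_save = true },
			{ .name = "dev2" },
		};
		int ndevs = sizeof(devs) / sizeof(devs[0]);
		bool domain_active = false;
		int i;

		/* Power-off path: stop at the first save error, no bulk rollback. */
		for (i = 0; i < ndevs; i++) {
			if (save_device(&devs[i])) {
				domain_active = true;	/* genpd_set_active() */
				break;
			}
		}

		/*
		 * Each device that was saved still has need_restore set, so it
		 * is restored individually later, e.g. on its next resume.
		 */
		for (i = 0; i < ndevs; i++)
			restore_device(&devs[i]);

		printf("domain %s\n", domain_active ? "active" : "off");
		return 0;
	}

Because every saved device carries its own need_restore flag, the
devices saved before the failure are restored one by one when they are
next used, so the power off error path only has to mark the domain
active again and bail out instead of rolling everything back on the
spot.
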
Index: linux-2.6/drivers/base/power/domain.c
===================================================================
--- linux-2.6.orig/drivers/base/power/domain.c
+++ linux-2.6/drivers/base/power/domain.c
@@ -269,8 +269,10 @@ static int pm_genpd_poweroff(struct gene
 
 	list_for_each_entry_reverse(dle, &genpd->dev_list, node) {
 		ret = __pm_genpd_save_device(dle, genpd);
-		if (ret)
-			goto err_dev;
+		if (ret) {
+			genpd_set_active(genpd);
+			goto out;
+		}
 
 		if (genpd_abort_poweroff(genpd))
 			goto out;
@@ -311,13 +313,6 @@ static int pm_genpd_poweroff(struct gene
 	genpd->poweroff_task = NULL;
 	wake_up_all(&genpd->status_wait_queue);
 	return ret;
-
- err_dev:
-	list_for_each_entry_continue(dle, &genpd->dev_list, node)
-		__pm_genpd_restore_device(dle, genpd);
-
-	genpd_set_active(genpd);
-	goto out;
 }
 
 /**
--