Message-ID: <Pine.LNX.4.44L0.1105191021220.1900-100000@iolanthe.rowland.org>
Date: Thu, 19 May 2011 10:25:12 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: Davide Ciminaghi <ciminaghi@...dd.com>
cc: "Rafael J. Wysocki" <rjw@...k.pl>,
<davinci-linux-open-source@...ux.davincidsp.com>,
Greg Kroah-Hartman <gregkh@...e.de>,
<linux-kernel@...r.kernel.org>,
Raffaele Recalcati <raffaele.recalcati@...cino.it>,
<linux-pm@...ts.linux-foundation.org>,
Raffaele Recalcati <lamiaposta71@...il.com>
Subject: Re: [linux-pm] [PATCH 2/4] PM / Loss: power loss management
On Thu, 19 May 2011, Davide Ciminaghi wrote:
> I'm not completely sure about this. What we wanted to do was to avoid
> powering down the MMC while it is physically writing data into its
> internal memory. If we force a sync when the power loss warning event
> happens, it is very difficult to guarantee that all buffered data will
> be written before power actually dies. So we preferred to follow another
> strategy: let the MMC finish any running write operation, and then stop
> its request queue. If power really goes down, we hope that the file
> system journal will fix things on the next boot (yes, some data could be
> lost, but the fs should still be mountable). On the other hand, if power
> resumes, nothing bad should happen to user space processes.
You could consider a totally different approach.
Each platform will have a different set of high-power devices it wants
to turn off when a power-loss warning occurs. So instead of changing
the core PM interface, you could add a new "power_loss" notifier list.
Only the most critical drivers would need to listen for notifications,
and those drivers could differ from platform to platform.
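For illustration only, here is a rough sketch of what such a notifier
list might look like, built on the kernel's standard notifier-chain API.
All of the power_loss_* names are made up for this example, and an
atomic chain is assumed on the theory that the warning arrives in
interrupt context:

/*
 * Sketch only: the power_loss_* symbols below are hypothetical and do
 * not exist in the kernel; this just shows the shape of the idea.
 */
#include <linux/notifier.h>
#include <linux/module.h>

static ATOMIC_NOTIFIER_HEAD(power_loss_notifier_list);

/* Critical drivers (the MMC host, say) register at probe time. */
int register_power_loss_notifier(struct notifier_block *nb)
{
	return atomic_notifier_chain_register(&power_loss_notifier_list, nb);
}
EXPORT_SYMBOL_GPL(register_power_loss_notifier);

int unregister_power_loss_notifier(struct notifier_block *nb)
{
	return atomic_notifier_chain_unregister(&power_loss_notifier_list, nb);
}
EXPORT_SYMBOL_GPL(unregister_power_loss_notifier);

/* Platform code calls this from its power-loss warning handler. */
int power_loss_notify(unsigned long event)
{
	return atomic_notifier_call_chain(&power_loss_notifier_list,
					  event, NULL);
}

Each registered driver's callback would then do whatever is cheapest
for that device (stop the request queue, gate a regulator, and so on),
and the core PM interface stays untouched.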
Alan Stern