Message-ID: <Pine.LNX.4.44L0.0707231120380.3545-100000@iolanthe.rowland.org>
Date: Mon, 23 Jul 2007 11:23:15 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: nigel@...pend2.net
cc: Paul Mackerras <paulus@...ba.org>, <david@...g.hm>,
Miklos Szeredi <miklos@...redi.hu>,
<linux-kernel@...r.kernel.org>, <miltonm@....com>,
<ying.huang@...el.com>, <linux-pm@...ts.linux-foundation.org>,
Jeremy Maitin-Shepard <jbms@....edu>
Subject: Re: [linux-pm] Re: Hibernation considerations
On Mon, 23 Jul 2007, Nigel Cunningham wrote:
> Take a step back for a second.
>
> The problem we're facing now is that some userspace threads, used in
> processing I/O, are acting as exceptions to the "freeze userspace, then
> freezable kernel threads" rule. They are exceptions only because of that
> role in processing I/O - because they're de facto kernel threads. So, if
> we orient our thinking more in terms of I/O processing and less in terms
> of the userspace/kernelspace distinction, we'll have a solution:
>
> 1) Freeze processes that aren't fs-related (i.e. stop them from
> generating I/O).
The problem here is that with things like FUSE, _every_ process is
potentially fs-related. Nothing prevents a FUSE thread from doing IPC
with any other thread.
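To make that concrete, here is a minimal sketch of a userspace filesystem
using libfuse's high-level API (the file name and contents are purely
illustrative).  The read handler runs in an ordinary userspace process;
freeze that process first and any task blocked in read() on the mount can
never reach a freezable state itself:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char hello_data[] = "hello\n";

/* Report a single read-only file, /hello, at the mount root. */
static int hello_getattr(const char *path, struct stat *st)
{
	memset(st, 0, sizeof(*st));
	if (strcmp(path, "/") == 0) {
		st->st_mode = S_IFDIR | 0755;
		st->st_nlink = 2;
	} else if (strcmp(path, "/hello") == 0) {
		st->st_mode = S_IFREG | 0444;
		st->st_nlink = 1;
		st->st_size = sizeof(hello_data) - 1;
	} else
		return -ENOENT;
	return 0;
}

static int hello_read(const char *path, char *buf, size_t size,
		      off_t off, struct fuse_file_info *fi)
{
	/* This runs in userspace: every read() of the mount waits,
	 * inside the kernel, for this process to be scheduled. */
	size_t len = sizeof(hello_data) - 1;

	if (strcmp(path, "/hello") != 0)
		return -ENOENT;
	if ((size_t)off >= len)
		return 0;
	if (size > len - off)
		size = len - off;
	memcpy(buf, hello_data + off, size);
	return size;
}

static struct fuse_operations hello_ops = {
	.getattr = hello_getattr,
	.read    = hello_read,
};

int main(int argc, char *argv[])
{
	return fuse_main(argc, argv, &hello_ops, NULL);
}

Mount it somewhere and cat the file; the reader is at the mercy of the
daemon staying runnable.  And since the daemon itself may touch other
mounts or do IPC with anything, there is no static set of "fs-related"
processes to leave unfrozen.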
> 2) Flush pending I/O.
> 3) Freeze filesystems in reverse order of dependency, the primary purpose
> being to stop them from generating further I/O against their metadata.
>
> Locks are held only while work is being done. If we progressively work
> through threads in terms of their create-work/process-work dependencies,
> we'll see that the problem isn't at all intractable.
As has been mentioned before, keeping track of all that dependency
information would be very fragile and time-consuming.
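Note that freezing a single filesystem isn't the hard part mechanically;
later kernels expose exactly that operation to userspace through the
FIFREEZE/FITHAW ioctls in <linux/fs.h> (the interface behind util-linux's
fsfreeze).  A minimal sketch, assuming the second argument is a mountpoint:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char *argv[])
{
	int fd, ret;

	if (argc != 3 ||
	    (strcmp(argv[1], "freeze") && strcmp(argv[1], "thaw"))) {
		fprintf(stderr, "usage: %s freeze|thaw <mountpoint>\n",
			argv[0]);
		return 1;
	}

	fd = open(argv[2], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* FIFREEZE syncs the filesystem and blocks new modifications;
	 * FITHAW lets writers continue. */
	ret = ioctl(fd, strcmp(argv[1], "freeze") == 0 ? FIFREEZE : FITHAW, 0);
	if (ret < 0)
		perror("ioctl");
	close(fd);
	return ret < 0 ? 1 : 0;
}

What it doesn't give you is the "reverse order of dependency" part: with
stacked, network, and FUSE-backed mounts, computing a safe ordering is
exactly the fragile bookkeeping above.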
Alan Stern