Message-ID: <20070505114253.GA5119@ucw.cz>
Date: Sat, 5 May 2007 11:42:54 +0000
From: Pavel Machek <pavel@....cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Nigel Cunningham <nigel@...el.suspend2.net>,
Pekka J Enberg <penberg@...helsinki.fi>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: Back to the future.
Hi!
> > The "let's stop all kernel threads" is superstition. It's the same kind of
> > superstition that made people write "sync" three times before turning off
> > the power in the olden times. It's the kind of superstition that comes
> > from "we don't do things right, so let's be vewy vewy quiet and _pray_
> > that it works when we are beign quiet".
>
> Side note: while I think things should probably *work* even with user
> processes going full bore while a snapshot is taken, I'll freely admit
> that I'll follow that superstition far enough that I think it's probably a
> good idea to try to quiesce the system to _some_ degree, and that stopping
> user programs is a good idea. Partly because of the whole memory-shrinking
> thing, and partly just because we should do the snapshot with hw IO queues
> empty.
>
> But I don't think it would necessarily be wrong (and in many ways it would
> probably be *right*) to do that IO queue stopping at the queue level
> rather than at a process level. Why stop processes just because you want
> to clean out IO queues? They are two totally different things!
Actually, I'd like to stop I/O queues; if there were an easy way to do
that, I'd happily switch. Note that we'll need to stop the 'I/O queues'
of the char devices, too...
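
For illustration only, here is a minimal sketch of what "stopping at the
queue level" could look like with the request-queue API
(blk_stop_queue()/blk_start_queue(), both called under the queue lock).
The helper names are made up for this example and this is not existing
suspend code; it also deliberately skips the hard part, namely finding
every queue that needs quiescing.

#include <linux/blkdev.h>
#include <linux/spinlock.h>

/*
 * Hypothetical helpers: stop one block request queue before the
 * snapshot and restart it afterwards.  blk_stop_queue() marks the
 * queue stopped so the block layer no longer calls the driver's
 * request_fn; requests already handed to the hardware still complete.
 */
static void snapshot_quiesce_queue(struct request_queue *q)
{
	unsigned long flags;

	spin_lock_irqsave(q->queue_lock, flags);
	blk_stop_queue(q);
	spin_unlock_irqrestore(q->queue_lock, flags);
}

static void snapshot_resume_queue(struct request_queue *q)
{
	unsigned long flags;

	spin_lock_irqsave(q->queue_lock, flags);
	blk_start_queue(q);
	spin_unlock_irqrestore(q->queue_lock, flags);
}

The real difficulty, as noted above, is enumerating all the queues and
finding an equivalent for char devices, which have no common queue
abstraction to stop.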
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html