Message-ID: <20061112223012.GH11034@melbourne.sgi.com>
Date: Mon, 13 Nov 2006 09:30:13 +1100
From: David Chinner <dgc@....com>
To: Pavel Machek <pavel@....cz>
Cc: David Chinner <dgc@....com>, "Rafael J. Wysocki" <rjw@...k.pl>,
Alasdair G Kergon <agk@...hat.com>,
Eric Sandeen <sandeen@...hat.com>,
Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
dm-devel@...hat.com, Srinivasa DS <srinivasa@...ibm.com>,
Nigel Cunningham <nigel@...pend2.net>
Subject: Re: [PATCH 2.6.19 5/5] fs: freeze_bdev with semaphore not mutex
On Fri, Nov 10, 2006 at 11:39:42AM +0100, Pavel Machek wrote:
> On Fri 2006-11-10 11:57:49, David Chinner wrote:
> > On Thu, Nov 09, 2006 at 11:21:46PM +0100, Rafael J. Wysocki wrote:
> > > I think we can add a flag to __create_workqueue() that will indicate if
> > > this one is to be running with PF_NOFREEZE and a corresponding macro like
> > > create_freezable_workqueue() to be used wherever we want the worker thread
> > > to freeze (in which case it should be calling try_to_freeze() somewhere).
> > > Then, we can teach filesystems to use this macro instead of
> > > create_workqueue().
> >
> > At what point does the workqueue get frozen? i.e. how does this
> > guarantee an unfrozen filesystem will end up in a consistent
> > state?
>
> Snapshot is atomic; workqueue will be unfrozen with everyone else, but
> as there were no writes in the meantime, there should be no problems.
That doesn't answer my question - when in the sequence of freezing
do you propose disabling the workqueues? Before the kernel threads,
after the kernel threads, before you sync the filesystem? And how does
freezing them at that point in time guarantee consistent filesystem state?
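
For reference, a rough sketch of what's being proposed above. The
third __create_workqueue() argument and run_pending_work() are
hypothetical illustrations, not actual kernel interfaces;
try_to_freeze() and PF_NOFREEZE are the existing ones:

	#include <linux/kthread.h>
	#include <linux/workqueue.h>
	#include <linux/freezer.h>	/* try_to_freeze() */

	/*
	 * Sketch only. A "freezable" workqueue would be created without
	 * PF_NOFREEZE set on its worker thread, so the worker parks in
	 * the refrigerator when the freezer runs.
	 */
	#define create_freezable_workqueue(name) \
		__create_workqueue((name), 0, 1)  /* hypothetical freezable flag */

	static int worker_thread(void *arg)
	{
		struct workqueue_struct *wq = arg;

		/* freezable: no current->flags |= PF_NOFREEZE here */
		while (!kthread_should_stop()) {
			try_to_freeze();	/* park while system is frozen */
			run_pending_work(wq);	/* hypothetical: drain queued work */
		}
		return 0;
	}

But the question above stands regardless of what the API looks like.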
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group