Message-ID: <20110523175204.GA21110@infradead.org>
Date: Mon, 23 May 2011 13:52:04 -0400
From: Christoph Hellwig <hch@...radead.org>
To: Alex Bligh <alex@...x.org.uk>
Cc: Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Andreas Dilger <adilger.kernel@...ger.ca>,
Theodore Ts'o <tytso@....edu>
Subject: Re: BUG: Failure to send REQ_FLUSH on unmount on ext3, ext4, and FS
in general
On Mon, May 23, 2011 at 06:39:23PM +0100, Alex Bligh wrote:
> I'm presuming that if just umount() were altered to do a REQ_FLUSH,
> the potential presence of 2 sync()s would not be too offensive, as
> unmount isn't exactly time critical, and as Christoph pointed out in
> the other thread, a REQ_FLUSH when the write cache has recently been
> emptied isn't going to take long.
Umount actually is the only place where adding it generically makes
sense. It's not time-critical, and with kill_block_super we actually
have a block-specific place to put it, instead of having to hack it
into the generic VFS, which is something we've been trying to avoid.
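Roughly something like this, sketched against the 2.6.39-era helpers
(untested, and the kill_block_super body around the new call is from
memory, so treat it as an illustration rather than a patch):

	void kill_block_super(struct super_block *sb)
	{
		struct block_device *bdev = sb->s_bdev;
		fmode_t mode = sb->s_mode;

		generic_shutdown_super(sb);
		sync_blockdev(bdev);

		/* new: drain the device's volatile write cache */
		blkdev_issue_flush(bdev, GFP_KERNEL, NULL);

		blkdev_put(bdev, mode | FMODE_EXCL);
	}

sync_blockdev only writes back dirty pages, it never touches the
drive's write cache, which is why the explicit flush has to go after
it.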
> Ah, fsdevel not here. OK. Partly I'd like to understand whether
> sync() not flushing write caches on barrier-less file systems
> is a good thing or a bad thing. I know barriers are better, but if
> writing to (e.g.) FAT32, I'm betting there is little prospect of
> barrier support.
"Barrier" support it's gone. It's really just the FUA and FLUSH
flags these days. For transactional filesystem these need to be
used to guarantee transaction integrity, but for all others just
adding one blkdev_issue_flush call to ->fsync and ->sync_fs is
enough. That's discounting filesystem that use multiple block
devices, which are a bit more complicated.
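For a simple single-device filesystem that boils down to something
like the following (hypothetical "myfs", written against the 2.6.39
prototypes, so an illustration rather than a tested patch):

	static int myfs_sync_fs(struct super_block *sb, int wait)
	{
		/*
		 * The generic sync code has already written back dirty
		 * data and metadata by the time ->sync_fs runs; all
		 * that is left is draining the disk's volatile cache.
		 */
		if (wait)
			return blkdev_issue_flush(sb->s_bdev,
						  GFP_KERNEL, NULL);
		return 0;
	}

	static int myfs_fsync(struct file *file, int datasync)
	{
		struct inode *inode = file->f_mapping->host;
		int err;

		/* write back this file's data pages and its inode */
		err = generic_file_fsync(file, datasync);
		if (err)
			return err;

		/* then flush the cache so the data is really on disk */
		return blkdev_issue_flush(inode->i_sb->s_bdev,
					  GFP_KERNEL, NULL);
	}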