Message-ID: <49510A34.10102@redhat.com>
Date: Tue, 23 Dec 2008 09:56:36 -0600
From: Eric Sandeen <sandeen@...hat.com>
To: Andreas Dilger <adilger@....com>
CC: Ric Wheeler <rwheeler@...hat.com>, Jan Kara <jack@...e.cz>,
Arthur Jones <ajones@...erbed.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"sct@...hat.com" <sct@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ext3: wait on all pending commits in ext3_sync_fs

Andreas Dilger wrote:
> On Dec 22, 2008 14:15 -0500, Ric Wheeler wrote:
>> Without having dived into the patch in detail, one worry I would have is
>> that we still might care to spin up a drive for empty transactions in
>> order to invalidate the drive's write cache.
>>
>> For example, if we have the following sequence:
>>
>> (1) user app performs series of writes to file A
>> (2) pages dirtied from writes to A are destaged to the disk over time
>> (3) user app issues fsync(file A) to make sure that the data will
>> survive a power outage
>>
>> At this point in time, would this change prevent us from spinning up the
>> drive and invalidating the disk write cache for that fsync() ?
>
> Well, if the writes themselves didn't spin up the drive, it is uncertain
> whether the write of the journal commit block would be any more helpful
> in getting that to happen.

So, ext4_sync_file() calls blkdev_issue_flush(), which should do the
right thing even if the drive is spun down, I think (rather than hoping
that some other journal activity would flush this out...)

I guess I don't know for sure what blkdev_issue_flush() does on a
spun-down drive, but I'd hope it does the right thing.
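
Something like the following is what I have in mind for the ext3 side.
It's just a sketch, modeled on the 2.6.28-era ext4_sync_file(); the
helper names and the two-argument blkdev_issue_flush() signature are
from kernels of that vintage, so read it as illustration rather than
the actual patch:

/*
 * Sketch only: an ext3_sync_file() that flushes the device write cache
 * whenever barriers are enabled, even if there was no journal commit to
 * push out.  That covers Ric's case of an fsync() arriving after all
 * the data pages have already been written back.
 */
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/jbd.h>
#include <linux/ext3_fs.h>
#include <linux/ext3_jbd.h>

int ext3_sync_file(struct file *file, struct dentry *dentry, int datasync)
{
	struct inode *inode = dentry->d_inode;
	journal_t *journal = EXT3_SB(inode->i_sb)->s_journal;
	int ret = 0;

	if (ext3_should_journal_data(inode)) {
		/* data=journal: a commit covers data and metadata both */
		ret = ext3_force_commit(inode->i_sb);
		goto out;
	}

	if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
		goto out;

	ret = ext3_force_commit(inode->i_sb);
out:
	/*
	 * Even when the commit was a no-op, data written back earlier
	 * may still be sitting in the drive's volatile write cache, so
	 * tell the block layer to flush it out.
	 */
	if (journal && (journal->j_flags & JFS_BARRIER))
		blkdev_issue_flush(inode->i_sb->s_bdev, NULL);
	return ret;
}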

Pretty sure I sent a patch for ext3 to do the same, but it was
ignored/dropped/forgotten along with the barriers-by-default patch.
Suppose I could try again.

-Eric