Message-ID: <20080516221506.GD15334@shareable.org>
Date: Fri, 16 May 2008 23:15:06 +0100
From: Jamie Lokier <jamie@...reable.org>
To: Eric Sandeen <sandeen@...hat.com>
Cc: ext4 development <linux-ext4@...r.kernel.org>,
    linux-kernel Mailing List <linux-kernel@...r.kernel.org>,
    linux-fsdevel <linux-fsdevel@...r.kernel.org>,
    Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/4] ext3: call blkdev_issue_flush on fsync
Eric Sandeen wrote:
> To ensure that bits are truly on-disk after an fsync,
> ext3 should call blkdev_issue_flush if barriers are supported.
>
> Inspired by an old thread on barriers, by reiserfs & xfs
> which do the same, and by a patch SuSE ships with their kernel
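
For context, the change under discussion amounts to roughly the
following in ext3's fsync path (a sketch only, not the patch as
posted; the barrier mount-option check here is illustrative):

/* fs/ext3/fsync.c: sketch only, not the submitted patch */
int ext3_sync_file(struct file *file, struct dentry *dentry, int datasync)
{
        struct inode *inode = dentry->d_inode;
        int ret = 0;

        /* ... existing journal commit / sync logic ... */

        /*
         * If the filesystem was mounted with barriers, ask the block
         * device to flush its volatile write cache so the committed
         * data actually reaches stable storage.
         */
        if (test_opt(inode->i_sb, BARRIER))
                blkdev_issue_flush(inode->i_sb->s_bdev, NULL);

        return ret;
}
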
blkdev_issue_flush uses BIO_RW_BARRIER, which becomes REQ_HARDBARRIER.
REQ_HARDBARRIER does _not_ mean "force the data to disk".  It is an
ordering request: a driver optimised for the highest performance may
complete it long before the data is physically on the disk.  (It can
even do nothing at all and still be correct, for example on a device
with no volatile write cache.)
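
(For reference, blkdev_issue_flush itself is just an empty barrier
bio.  Roughly, from memory of the current block layer code, with
error handling trimmed:)

/* Simplified sketch of blkdev_issue_flush(): submit an empty bio
 * marked as a barrier and wait for it to complete.  Nothing in the
 * request itself says "flush the write cache". */
DECLARE_COMPLETION_ONSTACK(wait);
struct bio *bio = bio_alloc(GFP_KERNEL, 0);

bio->bi_end_io  = bio_end_empty_barrier;        /* completes &wait */
bio->bi_private = &wait;
bio->bi_bdev    = bdev;
submit_bio(1 << BIO_RW_BARRIER, bio);           /* becomes REQ_HARDBARRIER */

wait_for_completion(&wait);
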
In many cases block drivers implement a barrier using a hardware cache
flush command, so in practice it comes out the same and your patch is
effective.  (It does, however, sometimes send an unnecessary flush:
when fsync has a journal commit to write, the barrier on that commit
already flushes the cache, and the extra flush is not required; it may
reduce performance.)

But that is not what REQ_HARDBARRIER means: block drivers are allowed
to implement barriers in other ways which are more efficient.

I'm not sure whether this problem is only theoretical, or whether any
block drivers actually take advantage of that freedom.

A good fix would be to add REQ_HARDFLUSH, with a precise meaning: do
not complete the request until the hardware reports the data safely
committed to stable media.  It would be a very small change to most
(if not all) current drivers.
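
Concretely, in a driver's completion path it would look something like
this (REQ_HARDFLUSH is of course hypothetical, and issue_cache_flush()
/ wait_flush_done() / complete_request() are placeholders for whatever
flush and completion mechanism the driver already has):

/* Hypothetical: honouring the proposed flag.  Unlike a plain barrier,
 * the request must not be completed until the device reports that the
 * data is on stable media. */
if (rq->cmd_flags & REQ_HARDFLUSH) {
        issue_cache_flush(dev);         /* SYNCHRONIZE CACHE, ATA FLUSH CACHE, ... */
        wait_flush_done(dev);           /* block until the hardware acks it */
}
complete_request(rq);
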
-- Jamie