[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20070601082140.GP32105@kernel.dk>
Date: Fri, 1 Jun 2007 10:21:41 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Tejun Heo <htejun@...il.com>
Cc: David Chinner <dgc@....com>, david@...g.hm,
Phillip Susi <psusi@....rr.com>, Neil Brown <neilb@...e.de>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
dm-devel@...hat.com, linux-raid@...r.kernel.org,
Stefan Bader <Stefan.Bader@...ibm.com>,
Andreas Dilger <adilger@...sterfs.com>
Subject: Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.
On Fri, Jun 01 2007, Tejun Heo wrote:
> Jens Axboe wrote:
> > On Thu, May 31 2007, David Chinner wrote:
> >> On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
> >>> On Thu, May 31 2007, David Chinner wrote:
> >>>> IOWs, there are two parts to the problem:
> >>>>
> >>>> 1 - guaranteeing I/O ordering
> >>>> 2 - guaranteeing blocks are on persistent storage.
> >>>>
> >>>> Right now, a single barrier I/O is used to provide both of these
> >>>> guarantees. In most cases, all we really need to provide is 1); the
> >>>> need for 2) is a much rarer condition but still needs to be
> >>>> provided.
> >>>>
> >>>>> if I am understanding it correctly, the big win for barriers is that you
> >>>>> do NOT have to stop and wait until the data is on persistant media before
> >>>>> you can continue.
> >>>> Yes, if we define a barrier to only guarantee 1), then yes this
> >>>> would be a big win (esp. for XFS). But that requires all filesystems
> >>>> to handle sync writes differently, and sync_blockdev() needs to
> >>>> call blkdev_issue_flush() as well....
> >>>>
> >>>> So, what do we do here? Do we define a barrier I/O to only provide
> >>>> ordering, or do we define it to also provide persistent storage
> >>>> writeback? Whatever we decide, it needs to be documented....
> >>> The block layer already has a notion of the two types of barriers, with
> >>> a very small amount of tweaking we could expose that. There's absolutely
> >>> zero reason we can't easily support both types of barriers.
> >> That sounds like a good idea - we can leave the existing
> >> WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
> >> behaviour that only guarantees ordering. The filesystem can then
> >> choose which to use where appropriate....
> >
> > Precisely. The current definition of barriers are what Chris and I came
> > up with many years ago, when solving the problem for reiserfs
> > originally. It is by no means the only feasible approach.
> >
> > I'll add a WRITE_ORDERED command to the #barrier branch, it already
> > contains the empty-bio barrier support I posted yesterday (well a
> > slightly modified and cleaned up version).
>
> Would that be very different from issuing barrier and not waiting for
> its completion? For ATA and SCSI, we'll have to flush write back cache
> anyway, so I don't see how we can get performance advantage by
> implementing separate WRITE_ORDERED. I think zero-length barrier
> (haven't looked at the code yet, still recovering from jet lag :-) can
> serve as genuine barrier without the extra write tho.
As always, it depends :-)
If you are doing pure flush barriers, then there's no difference. Unless
you only guarantee ordering wrt previously submitted requests, in which
case you can eliminate the post flush.
If you are doing ordered tags, then just setting the ordered bit is
enough. That is different from the barrier in that we don't need a flush
of FUA bit set.
In reality maybe the difference isn't all that great, at least we can
start by having WRITE_ORDERED == WRITE_BARRIER.
--
Jens Axboe
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists