Message-ID: <CACVXFVOyVi8hyxU5yv=heSy2SpjGAY9MKi3kow_gweFn8_tL-A@mail.gmail.com>
Date: Fri, 7 Aug 2015 04:25:52 -0400
From: Ming Lei <ming.lei@...onical.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jens Axboe <axboe@...nel.dk>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Dave Kleikamp <dave.kleikamp@...cle.com>,
Zach Brown <zab@...bo.net>,
Maxim Patlasov <mpatlasov@...allels.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Tejun Heo <tj@...nel.org>, Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v9 6/6] block: loop: support DIO & AIO
On Fri, Aug 7, 2015 at 3:43 AM, Christoph Hellwig <hch@...radead.org> wrote:
> I really disagree with the per-cmd use_dio tracking.
Could you explain that in a bit more detail?
>
> If we know at setup time that the loop device sector size is smaller
> than the sector size of the underlying device we should never allow
> dio, and otherwise it should always work for data.
Yes, that is just what I did in v7: dio is only allowed when the backing
device has a 512-byte sector size (not considering the following patches
from Hannes).
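
(To make the setup-time idea concrete: the check boils down to comparing
the loop queue's logical block size with the backing device's. The helper
below is only an illustrative sketch; the name and placement are made up
and it is not code from the patch.)

static bool loop_dio_supported(struct loop_device *lo, struct file *file)
{
	struct inode *inode = file->f_mapping->host;
	unsigned int sb_bsize = 512;

	/* logical block size of the device backing the file, if any */
	if (S_ISBLK(inode->i_mode))
		sb_bsize = bdev_logical_block_size(I_BDEV(inode));
	else if (inode->i_sb->s_bdev)
		sb_bsize = bdev_logical_block_size(inode->i_sb->s_bdev);

	/* never allow dio if the loop sector size is smaller */
	return queue_logical_block_size(lo->lo_queue) >= sb_bsize;
}

With a check like that done once at setup time (e.g. in loop_set_fd()),
there would be no per-command state at all.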
When the sector size of the backing device isn't 512, most transfers
(buffered I/O and normal dio) are still 4k aligned; that is why I suggest
per-cmd use_dio tracking.
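
(The kind of per-cmd check being suggested is roughly the alignment test
below. The function name is illustrative and a real version would also
need to look at the bvecs, but it shows why a 4k-aligned request can
still use dio even when the backing sector size is larger than 512.)

static bool lo_rq_can_use_dio(struct loop_device *lo, struct request *rq,
			      unsigned int sb_bsize)
{
	loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset;

	/*
	 * Both the file offset and the length must be aligned to the
	 * backing device's sector size for the dio path to be legal.
	 */
	return IS_ALIGNED(pos, sb_bsize) &&
	       IS_ALIGNED(blk_rq_bytes(rq), sb_bsize);
}

Requests that fail the test would simply fall back to buffered I/O for
that command.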
The patch avoids the race between buffered I/O and dio, doesn't it?
The introduced cost is trivial, and most of the time it doesn't need to
wait for completion of pending dio.
>
> The ->transfer check also is one to be done at setup time,
OK.
> and there is no need for draining or mode checking for an fsync - FLUSH
> is always only guaranteed to flush out I/O that has completed by the
> time it's issued.
Could you point it out in the patch?
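
(For reference: loop's flush handling is basically just a vfs_fsync() of
the backing file, roughly as below, so if FLUSH only has to cover I/O
that had already completed when it was issued, nothing needs to be
drained before calling it. This is a simplified sketch, not a quote of
the patch.)

static int lo_req_flush(struct loop_device *lo, struct request *rq)
{
	int ret = vfs_fsync(lo->lo_backing_file, 0);

	/* fsync may not be supported by the backing file */
	if (unlikely(ret && ret != -EINVAL))
		ret = -EIO;
	return ret;
}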
Thanks,