Date:	Thu, 30 Jul 2015 23:13:17 -0400
From:	Ming Lei <ming.lei@...onical.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Jens Axboe <axboe@...nel.dk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Dave Kleikamp <dave.kleikamp@...cle.com>,
	Zach Brown <zab@...bo.net>,
	Maxim Patlasov <mpatlasov@...allels.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Tejun Heo <tj@...nel.org>, Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v8 6/6] block: loop: support DIO & AIO

On Thu, Jul 30, 2015 at 12:42 PM, Christoph Hellwig <hch@...radead.org> wrote:
> On Thu, Jul 30, 2015 at 07:36:24AM -0400, Ming Lei wrote:
>> +     /*
>> +      * When working at direct I/O, under very unusual cases,
>> +      * such as unaligned direct I/O from application and
>> +      * access to loop block device with 'unaligned' offset & size,
>> +      * we have to fallback to non-dio mode.
>> +      *
>> +      * During the switch between dio and non-dio, page cache
>> +      * has to be flushed to the backing file.
>> +      */
>> +     if (unlikely(lo->use_dio && lo->last_use_dio != cmd->use_aio))
>> +             vfs_fsync(lo->lo_backing_file, 0);
>
> Filesystems do the cache flushing for you.

If that depends on the specific filesystem, it is better to do it here
explicitly; the page cache is still written back only once (either here
or in the filesystem).

>
>> +static inline bool req_dio_aligned(struct loop_device *lo,
>> +             const struct request *rq)
>> +{
>> +     return !((blk_rq_pos(rq) << 9) & lo->dio_align) &&
>> +             !(blk_rq_bytes(rq) & lo->dio_align);
>> +}
>> +
>>  static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
>>               const struct blk_mq_queue_data *bd)
>>  {
>> @@ -1554,6 +1658,13 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
>>       if (lo->lo_state != Lo_bound)
>>               return -EIO;
>>
>> +     if (lo->use_dio && !lo->transfer &&
>> +                     req_dio_aligned(lo, bd->rq) &&
>> +                     !(cmd->rq->cmd_flags & (REQ_FLUSH | REQ_DISCARD)))
>> +             cmd->use_aio = true;
>> +     else
>> +             cmd->use_aio = false;
>
> But honestly run time switching between buffered I/O and direct I/O from
> the same I/O stream is almost asking for triggering every possible
> race in the dio vs buffered I/O synchronization.  And there have been
> a lot of those..

Yes, I agree; that is why I was a bit reluctant to do runtime switching
in v7.

I can think of another way to avoid the race, by serializing dio and
buffered I/O strictly:

- before switching to buffered I/O from dio, wait for completion of all
pending dio
- run fsync()
- switch to buffered I/O

Then concurrent dio and buffered I/O cannot happen. What do you think
of this approach?

>
> I'd feel much more comfortable with a setup time check.

Yes, that is what v7 did.  But at setup time we can't know each
request's size and offset, so the simplest way is to support only a
512-byte sector size on the backing device, to avoid runtime switching.
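
For example, a setup-time-only check could look roughly like this (a
sketch, not the actual v7 code; loop_can_use_dio is a hypothetical
helper):

static bool loop_can_use_dio(struct loop_device *lo)
{
	struct inode *inode = lo->lo_backing_file->f_mapping->host;

	/*
	 * Hypothetical helper: with 512-byte logical blocks on the
	 * backing device, every sector-granular request is already
	 * dio-aligned, so no runtime switching is needed.
	 */
	if (!inode->i_sb->s_bdev)
		return false;

	return bdev_logical_block_size(inode->i_sb->s_bdev) == 512;
}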

Christoph and Dave, do you have a better idea?

Thanks,