Message-ID: <20150730164233.GA14158@infradead.org>
Date:	Thu, 30 Jul 2015 09:42:33 -0700
From:	Christoph Hellwig <hch@...radead.org>
To:	Ming Lei <ming.lei@...onical.com>
Cc:	Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
	Dave Kleikamp <dave.kleikamp@...cle.com>,
	Zach Brown <zab@...bo.net>,
	Christoph Hellwig <hch@...radead.org>,
	Maxim Patlasov <mpatlasov@...allels.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Tejun Heo <tj@...nel.org>, Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v8 6/6] block: loop: support DIO & AIO

On Thu, Jul 30, 2015 at 07:36:24AM -0400, Ming Lei wrote:
> +	/*
> +	 * When working with direct I/O, in very unusual cases, such as
> +	 * unaligned direct I/O from the application or access to the
> +	 * loop block device with an 'unaligned' offset & size, we have
> +	 * to fall back to non-dio mode.
> +	 *
> +	 * During the switch between dio and non-dio, the page cache
> +	 * has to be flushed to the backing file.
> +	 */
> +	if (unlikely(lo->use_dio && lo->last_use_dio != cmd->use_aio))
> +		vfs_fsync(lo->lo_backing_file, 0);

Filesystems do the cache flushing for you.
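
(For context, a schematic, from-memory paraphrase of what the generic
direct-write path in mm/filemap.c already does around each O_DIRECT write;
not a verbatim copy, and error handling is trimmed:)

	/* Write back any dirty page cache covering the range ... */
	err = filemap_write_and_wait_range(mapping, pos, pos + len - 1);
	if (err)
		return err;

	/* ... then invalidate it, so stale cached pages cannot survive
	 * a buffered -> direct transition for that range ... */
	if (mapping->nrpages)
		invalidate_inode_pages2_range(mapping,
				pos >> PAGE_CACHE_SHIFT,
				(pos + len - 1) >> PAGE_CACHE_SHIFT);

	/* ... and only then is ->direct_IO() invoked on the range. */

So an extra vfs_fsync() on every dio/non-dio transition in the loop driver
duplicates work the filesystem already does.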

> +static inline bool req_dio_aligned(struct loop_device *lo,
> +		const struct request *rq)
> +{
> +	return !((blk_rq_pos(rq) << 9) & lo->dio_align) &&
> +		!(blk_rq_bytes(rq) & lo->dio_align);
> +}
> +
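(Aside on the arithmetic above: dio_align is presumably the backing
device's logical block size minus one, used as a mask.  A small
stand-alone demonstration of the same check, with made-up values:)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same check as req_dio_aligned(): the request's byte offset
 * (sector position << 9) and its length must both be multiples of
 * the backing device's logical block size (dio_align = bsize - 1). */
static bool dio_aligned(uint64_t pos_sectors, uint32_t bytes,
			uint32_t dio_align)
{
	return !((pos_sectors << 9) & dio_align) && !(bytes & dio_align);
}

int main(void)
{
	/* 4096-byte logical blocks: the mask is 4095 */
	printf("%d\n", dio_aligned(8, 8192, 4095)); /* 1: 4 KiB offset, 8 KiB length */
	printf("%d\n", dio_aligned(1, 8192, 4095)); /* 0: 512-byte offset is unaligned */
	return 0;
}
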
>  static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
>  		const struct blk_mq_queue_data *bd)
>  {
> @@ -1554,6 +1658,13 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	if (lo->lo_state != Lo_bound)
>  		return -EIO;
>  
> +	if (lo->use_dio && !lo->transfer &&
> +			req_dio_aligned(lo, bd->rq) &&
> +			!(cmd->rq->cmd_flags & (REQ_FLUSH | REQ_DISCARD)))
> +		cmd->use_aio = true;
> +	else
> +		cmd->use_aio = false;

But honestly, run-time switching between buffered I/O and direct I/O within
the same I/O stream is almost asking to trigger every possible race in the
dio vs. buffered I/O synchronization.  And there have been a lot of those...

I'd feel much more comfortable with a setup-time check.
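
Something along these lines, purely as an illustration (names and placement
are hypothetical, not taken from the patch): compute the backing file's
alignment constraint once when the device is bound and decide lo->use_dio
there, so loop_queue_rq() only needs the cheap per-request size/offset check
and never flips between dio and buffered on the fly.

/*
 * Hypothetical setup-time check (illustrative only): called when the
 * backing file is configured, e.g. from the LOOP_SET_FD /
 * LOOP_SET_STATUS handling.
 */
static bool loop_backing_supports_dio(struct loop_device *lo)
{
	struct inode *inode = lo->lo_backing_file->f_mapping->host;
	unsigned int bsize = 512;

	/* Honour the logical block size of the device backing the
	 * filesystem, if there is one. */
	if (inode->i_sb->s_bdev)
		bsize = bdev_logical_block_size(inode->i_sb->s_bdev);

	/* No dio if a transfer (crypt) function is set, or if the loop
	 * offset is not aligned to that logical block size. */
	return !lo->transfer && !(lo->lo_offset & (bsize - 1));
}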