Message-ID: <BL0PR04MB65143B60B6062996764423C3E7BC9@BL0PR04MB6514.namprd04.prod.outlook.com>
Date:   Tue, 26 Jan 2021 04:06:06 +0000
From:   Damien Le Moal <Damien.LeMoal@....com>
To:     Ming Lei <ming.lei@...hat.com>,
        Changheun Lee <nanich.lee@...sung.com>
CC:     Johannes Thumshirn <Johannes.Thumshirn@....com>,
        "asml.silence@...il.com" <asml.silence@...il.com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "osandov@...com" <osandov@...com>,
        "patchwork-bot@...nel.org" <patchwork-bot@...nel.org>,
        "tj@...nel.org" <tj@...nel.org>,
        "tom.leiming@...il.com" <tom.leiming@...il.com>,
        "jisoo2146.oh@...sung.com" <jisoo2146.oh@...sung.com>,
        "junho89.kim@...sung.com" <junho89.kim@...sung.com>,
        "mj0123.lee@...sung.com" <mj0123.lee@...sung.com>,
        "seunghwan.hyun@...sung.com" <seunghwan.hyun@...sung.com>,
        "sookwan7.kim@...sung.com" <sookwan7.kim@...sung.com>,
        "woosung2.lee@...sung.com" <woosung2.lee@...sung.com>,
        "yt0928.kim@...sung.com" <yt0928.kim@...sung.com>
Subject: Re: [PATCH v3 1/2] bio: limit bio max size

On 2021/01/26 12:58, Ming Lei wrote:
> On Tue, Jan 26, 2021 at 10:32:34AM +0900, Changheun Lee wrote:
>> bio size can grow up to 4GB when multi-page bvec is enabled,
>> but sometimes this leads to inefficient behavior.
>> In the case of large-chunk direct I/O - a 32MB chunk read in user space -
>> all pages for the 32MB are merged into a single bio structure if the pages'
>> physical addresses are contiguous. This delays submission until the merge
>> completes. bio max size should be limited to a proper size.
>>
>> When a 32MB chunk read with the direct I/O option comes from userspace,
>> current kernel behavior is as follows (timeline):
>>
>>  | bio merge for 32MB. total 8,192 pages are merged.
>>  | total elapsed time is over 2ms.
>>  |------------------ ... ----------------------->|
>>                                                  | 8,192 pages are merged into a bio.
>>                                                  | at this time, the first bio submit is done.
>>                                                  | the bio is split into 32 read requests and issued.
>>                                                  |--------------->
>>                                                   |--------------->
>>                                                    |--------------->
>>                                                               ......
>>                                                                    |--------------->
>>                                                                     |--------------->|
>>                           total 19ms elapsed to complete 32MB read done from device. |
>>
>> If bio max size is limited with 1MB, behavior is changed below.
>>
>>  | bio merge for 1MB. 256 pages are merged for each bio.
>>  | a total of 32 bios will be made.
>>  | total elapsed time is still over 2ms, the same as before,
>>  | but the first bio submit happens much sooner, after about 100us.
>>  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
>>       | 256 pages are merged into a bio.
>>       | at this time, the first bio submit is done,
>>       | and 1 read request is issued for the bio.
>>       |--------------->
>>            |--------------->
>>                 |--------------->
>>                                       ......
>>                                                  |--------------->
>>                                                   |--------------->|
>>         total 17ms elapsed to complete 32MB read done from device. |
>>
>> As a result, read requests are issued sooner if the bio max size is limited.
>> With the current kernel's multi-page bvec behavior, a very large bio can be
>> created, which delays the issue of the first I/O request.
>>
>> Signed-off-by: Changheun Lee <nanich.lee@...sung.com>
>> ---
>>  block/bio.c            | 17 ++++++++++++++++-
>>  include/linux/bio.h    |  4 +++-
>>  include/linux/blkdev.h |  3 +++
>>  3 files changed, 22 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/bio.c b/block/bio.c
>> index 1f2cc1fbe283..ec0281889045 100644
>> --- a/block/bio.c
>> +++ b/block/bio.c
>> @@ -287,6 +287,21 @@ void bio_init(struct bio *bio, struct bio_vec *table,
>>  }
>>  EXPORT_SYMBOL(bio_init);
>>  
>> +unsigned int bio_max_size(struct bio *bio)
>> +{
>> +	struct request_queue *q;
>> +
>> +	if (!bio->bi_disk)
>> +		return UINT_MAX;
>> +
>> +	q = bio->bi_disk->queue;
>> +	if (!blk_queue_limit_bio_size(q))
>> +		return UINT_MAX;
>> +
>> +	return blk_queue_get_max_sectors(q, bio_op(bio)) << SECTOR_SHIFT;
>> +}
>> +EXPORT_SYMBOL(bio_max_size);
>> +
>>  /**
>>   * bio_reset - reinitialize a bio
>>   * @bio:	bio to reset
>> @@ -877,7 +892,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>>  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
>>  
>>  		if (page_is_mergeable(bv, page, len, off, same_page)) {
>> -			if (bio->bi_iter.bi_size > UINT_MAX - len) {
>> +			if (bio->bi_iter.bi_size > bio_max_size(bio) - len) {
>>  				*same_page = false;
>>  				return false;
>>  			}
> 
> So far we don't need bio->bi_disk or bio->bi_bdev (which will be changed in
> Christoph's patch) when adding a page to a bio, so there is a null pointer
> dereference risk.
> 
>> diff --git a/include/linux/bio.h b/include/linux/bio.h
>> index 1edda614f7ce..cdb134ca7bf5 100644
>> --- a/include/linux/bio.h
>> +++ b/include/linux/bio.h
>> @@ -100,6 +100,8 @@ static inline void *bio_data(struct bio *bio)
>>  	return NULL;
>>  }
>>  
>> +extern unsigned int bio_max_size(struct bio *);
>> +
>>  /**
>>   * bio_full - check if the bio is full
>>   * @bio:	bio to check
>> @@ -113,7 +115,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
>>  	if (bio->bi_vcnt >= bio->bi_max_vecs)
>>  		return true;
>>  
>> -	if (bio->bi_iter.bi_size > UINT_MAX - len)
>> +	if (bio->bi_iter.bi_size > bio_max_size(bio) - len)
>>  		return true;
>>  
>>  	return false;
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index f94ee3089e01..3aeab9e7e97b 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -621,6 +621,7 @@ struct request_queue {
>>  #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
>>  #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
>>  #define QUEUE_FLAG_NOWAIT       29	/* device supports NOWAIT */
>> +#define QUEUE_FLAG_LIMIT_BIO_SIZE 30	/* limit bio size */
>>  
>>  #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
>>  				 (1 << QUEUE_FLAG_SAME_COMP) |		\
>> @@ -667,6 +668,8 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
>>  #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
>>  #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
>>  #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
>> +#define blk_queue_limit_bio_size(q)	\
>> +	test_bit(QUEUE_FLAG_LIMIT_BIO_SIZE, &(q)->queue_flags)
> 
> I don't think adding a queue flag for this purpose is a good idea,
> since this case just needs to limit bio size so as not to delay bio
> submission too much. That is a logical concern, nothing to do with the
> request queue.
> 
> Just wondering why you don't take the following way:
> 
> 
> diff --git a/block/bio.c b/block/bio.c
> index 99040a7e6656..35852f7f47d4 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1081,7 +1081,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
>   * It's intended for direct IO, so doesn't do PSI tracking, the caller is
>   * responsible for setting BIO_WORKINGSET if necessary.
>   */
> -int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> +int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter, bool sync)
>  {
>  	int ret = 0;
>  
> @@ -1092,12 +1092,20 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>  		bio_set_flag(bio, BIO_NO_PAGE_REF);
>  		return 0;
>  	} else {
> +		/*
> +		 * Don't add too many pages in case of sync dio for
> +		 * avoiding delay bio submission too much especially
> +		 * pinning user pages in memory isn't cheap.
> +		 */
> +		const unsigned int max_size = sync ? (1U << 12) : UINT_MAX;

A 4KB max bio size? That is a little small :)
In any case, I am not a fan of using an arbitrary value unrelated to the
actual device characteristics. Wouldn't it be better to use the device's
max_sectors limit? And that limit would need to be zone_append_max_sectors for
zone append writes. So some helper like Changheun's bio_max_size() may be useful.

Apart from this point, I like your approach.

> +
>  		do {
>  			if (bio_op(bio) == REQ_OP_ZONE_APPEND)
>  				ret = __bio_iov_append_get_pages(bio, iter);
>  			else
>  				ret = __bio_iov_iter_get_pages(bio, iter);
> -		} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
> +		} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0) &&
> +				bio->bi_iter.bi_size < max_size);
>  	}
>  
>  	/* don't account direct I/O as memory stall */
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 6f5bd9950baf..0d1d436aca17 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -246,7 +246,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
>  	bio.bi_end_io = blkdev_bio_end_io_simple;
>  	bio.bi_ioprio = iocb->ki_ioprio;
>  
> -	ret = bio_iov_iter_get_pages(&bio, iter);
> +	ret = bio_iov_iter_get_pages(&bio, iter, true);
>  	if (unlikely(ret))
>  		goto out;
>  	ret = bio.bi_iter.bi_size;
> @@ -397,7 +397,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  		bio->bi_end_io = blkdev_bio_end_io;
>  		bio->bi_ioprio = iocb->ki_ioprio;
>  
> -		ret = bio_iov_iter_get_pages(bio, iter);
> +		ret = bio_iov_iter_get_pages(bio, iter, is_sync);
>  		if (unlikely(ret)) {
>  			bio->bi_status = BLK_STS_IOERR;
>  			bio_endio(bio);
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index ea1e8f696076..5105982a9bf8 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -277,7 +277,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>  		bio->bi_private = dio;
>  		bio->bi_end_io = iomap_dio_bio_end_io;
>  
> -		ret = bio_iov_iter_get_pages(bio, dio->submit.iter);
> +		ret = bio_iov_iter_get_pages(bio, dio->submit.iter,
> +				is_sync_kiocb(dio->iocb));
>  		if (unlikely(ret)) {
>  			/*
>  			 * We have to stop part way through an IO. We must fall
> diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> index bec47f2d074b..c95ac37f9305 100644
> --- a/fs/zonefs/super.c
> +++ b/fs/zonefs/super.c
> @@ -690,7 +690,7 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
>  	if (iocb->ki_flags & IOCB_DSYNC)
>  		bio->bi_opf |= REQ_FUA;
>  
> -	ret = bio_iov_iter_get_pages(bio, from);
> +	ret = bio_iov_iter_get_pages(bio, from, is_sync_kiocb(iocb));
>  	if (unlikely(ret))
>  		goto out_release;
>  
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 676870b2c88d..fa3a503b955c 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -472,7 +472,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>  		unsigned int len, unsigned int off, bool *same_page);
>  void __bio_add_page(struct bio *bio, struct page *page,
>  		unsigned int len, unsigned int off);
> -int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
> +int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter, bool sync);
>  void bio_release_pages(struct bio *bio, bool mark_dirty);
>  extern void bio_set_pages_dirty(struct bio *bio);
>  extern void bio_check_pages_dirty(struct bio *bio);
> 
> 
> Thanks,
> Ming
> 
> 


-- 
Damien Le Moal
Western Digital Research
