Message-ID: <DM6PR04MB70819D1D9C6301E399FC1A67E73D9@DM6PR04MB7081.namprd04.prod.outlook.com>
Date: Wed, 2 Jun 2021 12:51:17 +0000
From: Damien Le Moal <Damien.LeMoal@....com>
To: Changheun Lee <nanich.lee@...sung.com>,
"hch@...radead.org" <hch@...radead.org>,
Johannes Thumshirn <Johannes.Thumshirn@....com>,
"alex_y_xu@...oo.ca" <alex_y_xu@...oo.ca>,
"asml.silence@...il.com" <asml.silence@...il.com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
"bgoncalv@...hat.com" <bgoncalv@...hat.com>,
"bvanassche@....org" <bvanassche@....org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"jaegeuk@...nel.org" <jaegeuk@...nel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ming.lei@...hat.com" <ming.lei@...hat.com>,
"osandov@...com" <osandov@...com>,
"patchwork-bot@...nel.org" <patchwork-bot@...nel.org>,
"tj@...nel.org" <tj@...nel.org>,
"tom.leiming@...il.com" <tom.leiming@...il.com>,
"yi.zhang@...hat.com" <yi.zhang@...hat.com>
CC: "jisoo2146.oh@...sung.com" <jisoo2146.oh@...sung.com>,
"junho89.kim@...sung.com" <junho89.kim@...sung.com>,
"mj0123.lee@...sung.com" <mj0123.lee@...sung.com>,
"seunghwan.hyun@...sung.com" <seunghwan.hyun@...sung.com>,
"sookwan7.kim@...sung.com" <sookwan7.kim@...sung.com>,
"woosung2.lee@...sung.com" <woosung2.lee@...sung.com>,
"yt0928.kim@...sung.com" <yt0928.kim@...sung.com>
Subject: Re: [PATCH v11 1/3] bio: control bio max size
On 2021/06/02 21:29, Changheun Lee wrote:
> bio size can grow up to 4GB now that multi-page bvecs are enabled,
> but sometimes a very large bio leads to inefficient behavior.
> Being able to control the bio max size helps to avoid this inefficiency.
>
> Below is an example of the inefficient behavior.
> In the case of a large chunk direct I/O - a 32MB chunk read in user
> space - all pages for the 32MB are merged into a single bio structure
> if the pages' physical addresses are contiguous. This delays the first
> submit until the merge is complete. The bio max size should be limited
> to a proper size.
>
> When a 32MB chunk read with the direct I/O option comes from userspace,
> the current kernel behavior in the do_direct_IO() loop is shown in the
> timeline below.
>
> | bio merge for 32MB. a total of 8,192 pages are merged.
> | total elapsed time is over 2ms.
> |------------------ ... ----------------------->|
>                                                 | 8,192 pages are merged into one bio.
>                                                 | at this point the first bio submit is done.
>                                                 | the bio is split into 32 read requests and issued.
>                                                 |--------------->
>                                                 |--------------->
>                                                 |--------------->
>                                                  ......
>                                                 |--------------->
>                                                 |--------------->|
>    total 19ms elapsed to complete the 32MB read from the device. |
>
> If the bio max size is limited to 1MB, the behavior changes as shown below.
>
> | bio merge for 1MB. 256 pages are merged for each bio.
> | a total of 32 bios will be made.
> | total elapsed time is still over 2ms,
> | but the first bio submit happens early, after about 100us.
> |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
>      | 256 pages are merged into a bio.
>      | at this point the first bio submit is done,
>      | and 1 read request is issued for that bio.
>      |--------------->
>           |--------------->
>                |--------------->
>                  ......
>                                            |--------------->
>                                                 |--------------->|
>    total 17ms elapsed to complete the 32MB read from the device. |
>
> As a result, read requests are issued earlier when the bio max size is
> limited. With the current kernel behavior with multi-page bvec, a very
> large bio can be created, and that delays the issue of the first I/O
> request.
>
> Signed-off-by: Changheun Lee <nanich.lee@...sung.com>
> ---
> block/bio.c | 17 ++++++++++++++---
> block/blk-settings.c | 19 +++++++++++++++++++
> include/linux/bio.h | 4 +++-
> include/linux/blkdev.h | 3 +++
> 4 files changed, 39 insertions(+), 4 deletions(-)
>
> diff --git a/block/bio.c b/block/bio.c
> index 44205dfb6b60..c52639bb80cd 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
> }
> EXPORT_SYMBOL(bio_init);
>
> +unsigned int bio_max_size(struct bio *bio)
It would be nice to call this function the same as the limit name: bio_max_bytes().
> +{
> + struct block_device *bdev = bio->bi_bdev;
> +
> + return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
My personal preference goes to a plain if() instead of this.
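Something like this is easier to read (untested, same logic as the ternary above):

	if (bdev)
		return bdev->bd_disk->queue->limits.max_bio_bytes;

	return UINT_MAX;
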
> +}
> +
> /**
> * bio_reset - reinitialize a bio
> * @bio: bio to reset
> @@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
> struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
>
> if (page_is_mergeable(bv, page, len, off, same_page)) {
> - if (bio->bi_iter.bi_size > UINT_MAX - len) {
> + if (bio->bi_iter.bi_size > bio_max_size(bio) - len) {
> *same_page = false;
> return false;
> }
> @@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> {
> unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> + unsigned int bytes_left = bio_max_size(bio) - bio->bi_iter.bi_size;
> struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> struct page **pages = (struct page **)bv;
> bool same_page = false;
> @@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>
> - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> + size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> + &offset);
> if (unlikely(size <= 0))
> return size ? size : -EFAULT;
>
> @@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> {
> unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> + unsigned int bytes_left = bio_max_size(bio) - bio->bi_iter.bi_size;
> struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
> struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> @@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>
> - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> + size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> + &offset);
> if (unlikely(size <= 0))
> return size ? size : -EFAULT;
>
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 902c40d67120..e270e31519a1 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
> */
> void blk_set_default_limits(struct queue_limits *lim)
> {
> + lim->max_bio_bytes = UINT_MAX;
> lim->max_segments = BLK_MAX_SEGMENTS;
> lim->max_discard_segments = 1;
> lim->max_integrity_segments = 0;
What about the limit setup for stacked devices? Is leaving it at UINT_MAX OK?
If for your use case you add dm-linear on top of the device and rerun your
test, don't you get the slow performance again?
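I would expect the new limit to also be propagated when limits are stacked,
e.g. with something like the below added to blk_stack_limits() (untested
sketch, assuming the max_bio_bytes field added by this patch):

	/* keep the smallest max_bio_bytes of the stacked devices */
	t->max_bio_bytes = min(t->max_bio_bytes, b->max_bio_bytes);

Otherwise a DM/MD device on top keeps the UINT_MAX default.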
> @@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
> }
> EXPORT_SYMBOL(blk_queue_bounce_limit);
>
> +/**
> + * blk_queue_max_bio_bytes - set bio max size for queue
> + * @q: the request queue for the device
> + * @bytes : bio max bytes to be set
> + *
> + * Description:
> + * Set proper bio max size to optimize queue operating.
> + **/
> +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
> +{
> + struct queue_limits *limits = &q->limits;
> + unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
> +
> + limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
> + BIO_MAX_VECS * PAGE_SIZE);
> +}
> +EXPORT_SYMBOL(blk_queue_max_bio_bytes);
Why does this need to be exported?
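If the expectation is that block drivers built as modules set this limit from
their probe code, for example (hypothetical caller, using the 1MB value from
the commit message):

	blk_queue_max_bio_bytes(q, SZ_1M);

then the export is needed. Otherwise it can be dropped.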
> +
> /**
> * blk_queue_max_hw_sectors - set max sectors for a request for this queue
> * @q: the request queue for the device
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index a0b4cfdf62a4..f1a99f0a240c 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
> return NULL;
> }
>
> +extern unsigned int bio_max_size(struct bio *bio);
No need for extern.
> +
> /**
> * bio_full - check if the bio is full
> * @bio: bio to check
> @@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
> if (bio->bi_vcnt >= bio->bi_max_vecs)
> return true;
>
> - if (bio->bi_iter.bi_size > UINT_MAX - len)
> + if (bio->bi_iter.bi_size > bio_max_size(bio) - len)
> return true;
>
> return false;
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 1255823b2bc0..861888501fc0 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -326,6 +326,8 @@ enum blk_bounce {
> };
>
> struct queue_limits {
> + unsigned int max_bio_bytes;
> +
> enum blk_bounce bounce;
> unsigned long seg_boundary_mask;
> unsigned long virt_boundary_mask;
> @@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
> * Access functions for manipulating queue properties
> */
> extern void blk_cleanup_queue(struct request_queue *);
> +extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
No need for extern.
> void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
> extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
> extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
>
--
Damien Le Moal
Western Digital Research