Message-Id: <20210604073459.29235-1-nanich.lee@samsung.com>
Date:   Fri,  4 Jun 2021 16:34:59 +0900
From:   Changheun Lee <nanich.lee@...sung.com>
To:     damien.lemoal@....com
Cc:     Avri.Altman@....com, Johannes.Thumshirn@....com,
        alex_y_xu@...oo.ca, alim.akhtar@...sung.com,
        asml.silence@...il.com, axboe@...nel.dk, bgoncalv@...hat.com,
        bvanassche@....org, cang@...eaurora.org,
        gregkh@...uxfoundation.org, hch@...radead.org, jaegeuk@...nel.org,
        jejb@...ux.ibm.com, jisoo2146.oh@...sung.com,
        junho89.kim@...sung.com, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
        martin.petersen@...cle.com, ming.lei@...hat.com,
        mj0123.lee@...sung.com, nanich.lee@...sung.com, osandov@...com,
        patchwork-bot@...nel.org, seunghwan.hyun@...sung.com,
        sookwan7.kim@...sung.com, tj@...nel.org, tom.leiming@...il.com,
        woosung2.lee@...sung.com, yi.zhang@...hat.com,
        yt0928.kim@...sung.com
Subject: Re: [PATCH v12 1/3] bio: control bio max size

> On 2021/06/04 14:22, Changheun Lee wrote:
> > A bio's size can grow up to 4GB now that multi-page bvecs are enabled,
> > but sometimes a large bio leads to inefficient behavior.
> > Controlling the bio max size helps to avoid this inefficiency.
> > 
> > Below is an example of the inefficient behavior.
> > In the case of large-chunk direct I/O - a 32MB chunk read in user
> > space - all pages for the 32MB are merged into one bio structure if
> > the pages' physical addresses are contiguous. This delays submission
> > until the merge completes, so the bio max size should be limited to a
> > proper value.
> > 
> > When a 32MB chunk read with the direct I/O option comes from userspace,
> > the current kernel behavior in the do_direct_IO() loop is shown in the
> > timeline below.
> > 
> >  | bio merge for 32MB. total 8,192 pages are merged.
> >  | total elapsed time is over 2ms.
> >  |------------------ ... ----------------------->|
> >                                                  | 8,192 pages merged into a bio.
> >                                                  | at this point, the first bio submit is done.
> >                                                  | 1 bio is split into 32 read requests and issued.
> >                                                  |--------------->
> >                                                   |--------------->
> >                                                    |--------------->
> >                                                               ......
> >                                                                    |--------------->
> >                                                                     |--------------->|
> >                        total 19ms elapsed to complete the 32MB read from the device. |
> > 
> > If the bio max size is limited to 1MB, the behavior changes as below.
> > 
> >  | bio merge for 1MB. 256 pages are merged for each bio.
> >  | a total of 32 bios will be made.
> >  | total elapsed time is over 2ms. it's the same.
> >  | but the first bio submit happens much earlier, at about 100us.
> >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> >       | 256 pages merged into a bio.
> >       | at this point, the first bio submit is done.
> >       | and 1 read request is issued per bio.
> >       |--------------->
> >            |--------------->
> >                 |--------------->
> >                                       ......
> >                                                  |--------------->
> >                                                   |--------------->|
> >      total 17ms elapsed to complete the 32MB read from the device. |
> > 
> > As a result, read requests are issued sooner if the bio max size is
> > limited. With the current kernel's multi-page bvec behavior, a very
> > large bio can be created, which delays issuing the first I/O request.
> > 
> > Signed-off-by: Changheun Lee <nanich.lee@...sung.com>
> > ---
> >  block/bio.c            | 17 ++++++++++++++---
> >  block/blk-settings.c   | 19 +++++++++++++++++++
> >  include/linux/bio.h    |  4 +++-
> >  include/linux/blkdev.h |  3 +++
> >  4 files changed, 39 insertions(+), 4 deletions(-)
> > 
> > diff --git a/block/bio.c b/block/bio.c
> > index 44205dfb6b60..73b673f1684e 100644
> > --- a/block/bio.c
> > +++ b/block/bio.c
> > @@ -255,6 +255,13 @@ void bio_init(struct bio *bio, struct bio_vec *table,
> >  }
> >  EXPORT_SYMBOL(bio_init);
> >  
> > +unsigned int bio_max_bytes(struct bio *bio)
> > +{
> > +	struct block_device *bdev = bio->bi_bdev;
> > +
> > +	return bdev ? bdev->bd_disk->queue->limits.max_bio_bytes : UINT_MAX;
> > +}
> 
> unsigned int bio_max_bytes(struct bio *bio)
> {
> 	struct block_device *bdev = bio->bi_bdev;
> 
> 	if (!bdev)
> 		return UINT_MAX;
> 	return bdev->bd_disk->queue->limits.max_bio_bytes;
> }
> 
> is a lot more readable...
> Also, I remember there was some problems with bd_disk possibly being null. Was
> that fixed ?

Thank you for the review. But I'd prefer the current style, and I think it's
readable enough as is. The NULL bd_disk was just a suspicion; bd_disk is not
NULL if bdev is not NULL.
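
For reference, a hypothetical driver-side call (not part of this patch;
"q" here is just a placeholder for the driver's request queue) would cap
the per-bio size at the 1MB used in the example above:

	/*
	 * Illustrative only: blk_queue_max_bio_bytes() rounds the value
	 * up to PAGE_SIZE and clamps it to at least
	 * BIO_MAX_VECS * PAGE_SIZE, per the patch above.
	 */
	blk_queue_max_bio_bytes(q, SZ_1M);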

> 
> > +
> >  /**
> >   * bio_reset - reinitialize a bio
> >   * @bio:	bio to reset
> > @@ -866,7 +873,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
> >  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
> >  
> >  		if (page_is_mergeable(bv, page, len, off, same_page)) {
> > -			if (bio->bi_iter.bi_size > UINT_MAX - len) {
> > +			if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len) {
> >  				*same_page = false;
> >  				return false;
> >  			}
> > @@ -995,6 +1002,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  {
> >  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> >  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> > +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
> >  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> >  	struct page **pages = (struct page **)bv;
> >  	bool same_page = false;
> > @@ -1010,7 +1018,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> >  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >  
> > -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> > +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> > +				  &offset);
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
> >  
> > @@ -1038,6 +1047,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> >  {
> >  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> >  	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> > +	unsigned int bytes_left = bio_max_bytes(bio) - bio->bi_iter.bi_size;
> >  	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> >  	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
> >  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> > @@ -1058,7 +1068,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
> >  	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> >  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >  
> > -	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> > +	size = iov_iter_get_pages(iter, pages, bytes_left, nr_pages,
> > +				  &offset);
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
> >  
> > diff --git a/block/blk-settings.c b/block/blk-settings.c
> > index 902c40d67120..e270e31519a1 100644
> > --- a/block/blk-settings.c
> > +++ b/block/blk-settings.c
> > @@ -32,6 +32,7 @@ EXPORT_SYMBOL_GPL(blk_queue_rq_timeout);
> >   */
> >  void blk_set_default_limits(struct queue_limits *lim)
> >  {
> > +	lim->max_bio_bytes = UINT_MAX;
> >  	lim->max_segments = BLK_MAX_SEGMENTS;
> >  	lim->max_discard_segments = 1;
> >  	lim->max_integrity_segments = 0;
> > @@ -100,6 +101,24 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
> >  }
> >  EXPORT_SYMBOL(blk_queue_bounce_limit);
> >  
> > +/**
> > + * blk_queue_max_bio_bytes - set bio max size for queue
> 
> blk_queue_max_bio_bytes - set max_bio_bytes queue limit
> 
> And then you can drop the not very useful description.

OK, I will.
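
Something like this for v13, with the description dropped as you suggested:

	/**
	 * blk_queue_max_bio_bytes - set max_bio_bytes queue limit
	 * @q: the request queue for the device
	 * @bytes: bio max bytes to be set
	 */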

> 
> > + * @q: the request queue for the device
> > + * @bytes : bio max bytes to be set
> > + *
> > + * Description:
> > + *    Set proper bio max size to optimize queue operating.
> > + **/
> > +void blk_queue_max_bio_bytes(struct request_queue *q, unsigned int bytes)
> > +{
> > +	struct queue_limits *limits = &q->limits;
> > +	unsigned int max_bio_bytes = round_up(bytes, PAGE_SIZE);
> > +
> > +	limits->max_bio_bytes = max_t(unsigned int, max_bio_bytes,
> > +				      BIO_MAX_VECS * PAGE_SIZE);
> > +}
> > +EXPORT_SYMBOL(blk_queue_max_bio_bytes);
> 
> Setting of the stacked limits is still missing.

In this patch, max_bio_bytes for a stacked device is just the default
(UINT_MAX), because blk_set_stacking_limits() calls blk_set_default_limits().
I'll continue working on stacked device support after this patch series.
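
As a rough sketch of that follow-up (not in this patch; "t" and "b" are
the top/bottom queue_limits as in blk_stack_limits()), the stacked limit
could inherit the most restrictive value of its components:

	/* hypothetical follow-up: take the tighter per-bio limit */
	t->max_bio_bytes = min(t->max_bio_bytes, b->max_bio_bytes);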

> 
> > +
> >  /**
> >   * blk_queue_max_hw_sectors - set max sectors for a request for this queue
> >   * @q:  the request queue for the device
> > diff --git a/include/linux/bio.h b/include/linux/bio.h
> > index a0b4cfdf62a4..3959cc1a0652 100644
> > --- a/include/linux/bio.h
> > +++ b/include/linux/bio.h
> > @@ -106,6 +106,8 @@ static inline void *bio_data(struct bio *bio)
> >  	return NULL;
> >  }
> >  
> > +extern unsigned int bio_max_bytes(struct bio *bio);
> > +
> >  /**
> >   * bio_full - check if the bio is full
> >   * @bio:	bio to check
> > @@ -119,7 +121,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
> >  	if (bio->bi_vcnt >= bio->bi_max_vecs)
> >  		return true;
> >  
> > -	if (bio->bi_iter.bi_size > UINT_MAX - len)
> > +	if (bio->bi_iter.bi_size > bio_max_bytes(bio) - len)
> >  		return true;
> >  
> >  	return false;
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index 1255823b2bc0..861888501fc0 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -326,6 +326,8 @@ enum blk_bounce {
> >  };
> >  
> >  struct queue_limits {
> > +	unsigned int		max_bio_bytes;
> > +
> >  	enum blk_bounce		bounce;
> >  	unsigned long		seg_boundary_mask;
> >  	unsigned long		virt_boundary_mask;
> > @@ -1132,6 +1134,7 @@ extern void blk_abort_request(struct request *);
> >   * Access functions for manipulating queue properties
> >   */
> >  extern void blk_cleanup_queue(struct request_queue *);
> > +extern void blk_queue_max_bio_bytes(struct request_queue *, unsigned int);
> >  void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit);
> >  extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
> >  extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
> > 
> 
> 
> -- 
> Damien Le Moal
> Western Digital Research

Thank you,
Changheun Lee
