Message-ID: <43pgbt3ugu4mwaeh2ai537vkejqott2pez3qkxvuf4utq5ibgq@q4ncjkea4kwy>
Date: Thu, 21 Aug 2025 16:06:11 +0800
From: Coly Li <colyli@...nel.org>
To: Yu Kuai <yukuai1@...weicloud.com>
Cc: axboe@...nel.dk, neil@...wn.name, akpm@...ux-foundation.org,
linux-block@...r.kernel.org, linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
xni@...hat.com, yukuai3@...wei.com, yi.zhang@...wei.com, yangerkun@...wei.com,
johnny.chenyi@...wei.com
Subject: Re: [PATCH] block: fix disordered IO in the case of recursive split
On Thu, Aug 21, 2025 at 03:47:06PM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@...wei.com>
>
> Currently, a split bio is chained to the original bio, and the original
> bio is resubmitted to the tail of current->bio_list, waiting for the
> split bio to be issued. However, if the split bio gets split again, the
> IO order will be messed up. For example, in raid456 an IO will first be
> split by max_sectors in md_submit_bio(), and then split again by
> chunksize for internal handling:
>
> Assume max_sectors is 1M and chunksize is 512k:
>
> 1) issue a 2M IO:
>
> bio issuing: 0+2M
> current->bio_list: NULL
>
> 2) md_submit_bio() split by max_sectors:
>
> bio issuing: 0+1M
> current->bio_list: 1M+1M
>
> 3) chunk_aligned_read() split by chunksize:
>
> bio issuing: 0+512k
> current->bio_list: 1M+1M -> 512k+512k
>
> 4) after the first bio is issued, __submit_bio_noacct() will continue
> issuing the next bio:
>
> bio issuing: 1M+1M
> current->bio_list: 512k+512k
> bio issued: 0+512k
>
> 5) chunk_aligned_read() split by chunksize:
>
> bio issuing: 1M+512k
> current->bio_list: 512k+512k -> 1536k+512k
> bio issued: 0+512k
>
> 6) no further split afterwards; the final issue order is:
>
> 0+512k -> 1M+512k -> 512k+512k -> 1536k+512k
>
> This behaviour causes a large sequential read on raid456 to end up as
> small discontinuous IO on the underlying disks. Fix this problem by
> placing the chained bio at the head of current->bio_list.
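>
> For illustration, the reordering can be modeled in userspace. This is
> only a sketch, not kernel code: bios are reduced to offset+len pairs in
> KiB, current->bio_list to a plain array, and the hypothetical helpers
> add_tail()/add_head() stand in for bio_list_add()/bio_list_add_head().
> Every bio queued by split() is a remainder that would carry BIO_CHAIN:
>
> #include <stdio.h>
>
> struct bio { unsigned off, len; };	/* offset and length in KiB */
>
> static struct bio list[16];		/* models current->bio_list */
> static int nr;
>
> static void add_tail(struct bio b)	/* models bio_list_add() */
> {
> 	list[nr++] = b;
> }
>
> static void add_head(struct bio b)	/* models bio_list_add_head() */
> {
> 	for (int i = nr++; i > 0; i--)
> 		list[i] = list[i - 1];
> 	list[0] = b;
> }
>
> /* split @b at @at KiB: queue the remainder, keep issuing the front */
> static struct bio split(struct bio b, unsigned at,
> 			void (*queue)(struct bio))
> {
> 	queue((struct bio){ b.off + at, b.len - at });
> 	b.len = at;
> 	return b;
> }
>
> static void run(void (*queue)(struct bio))
> {
> 	queue((struct bio){ 0, 2048 });		/* the 2M read */
> 	while (nr) {
> 		struct bio b = list[0];		/* pop the list head */
>
> 		nr--;
> 		for (int i = 0; i < nr; i++)
> 			list[i] = list[i + 1];
> 		if (b.len > 1024)	/* md_submit_bio(): max_sectors */
> 			b = split(b, 1024, queue);
> 		if (b.len > 512)	/* chunk_aligned_read(): chunksize */
> 			b = split(b, 512, queue);
> 		printf("%uk+%uk ", b.off, b.len);	/* bio issued */
> 	}
> 	printf("\n");
> }
>
> int main(void)
> {
> 	run(add_tail);	/* 0k+512k 1024k+512k 512k+512k 1536k+512k */
> 	run(add_head);	/* 0k+512k 512k+512k 1024k+512k 1536k+512k */
> 	return 0;
> }
>
> The add_tail() run reproduces the disordered sequence from step 6),
> while the add_head() run issues the whole read sequentially.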
>
> Test script: tested on an 8-disk raid5 with 64k chunksize
> dd if=/dev/md0 of=/dev/null bs=4480k iflag=direct
>
> Test results:
> Before this patch
> 1) iostat results:
> Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz aqu-sz %util
> md0 52430.00 3276.87 0.00 0.00 0.62 64.00 32.60 80.10
> sd* 4487.00 409.00 2054.00 31.40 0.82 93.34 3.68 71.20
> 2) blktrace G stage:
> 8,0 0 486445 11.357392936 843 G R 14071424 + 128 [dd]
> 8,0 0 486451 11.357466360 843 G R 14071168 + 128 [dd]
> 8,0 0 486454 11.357515868 843 G R 14071296 + 128 [dd]
> 8,0 0 486468 11.357968099 843 G R 14072192 + 128 [dd]
> 8,0 0 486474 11.358031320 843 G R 14071936 + 128 [dd]
> 8,0 0 486480 11.358096298 843 G R 14071552 + 128 [dd]
> 8,0 0 486490 11.358303858 843 G R 14071808 + 128 [dd]
> 3) io seek for sdx:
> Note: io seek is computed from the blktrace D stage as the statistic:
> ABS((offset of next IO) - (offset + len of previous IO))
>
> Read|Write seek
> cnt 55175, zero cnt 25079
> >=(KB) .. <(KB) : count ratio |distribution |
> 0 .. 1 : 25079 45.5% |########################################|
> 1 .. 2 : 0 0.0% | |
> 2 .. 4 : 0 0.0% | |
> 4 .. 8 : 0 0.0% | |
> 8 .. 16 : 0 0.0% | |
> 16 .. 32 : 0 0.0% | |
> 32 .. 64 : 12540 22.7% |##################### |
> 64 .. 128 : 2508 4.5% |##### |
> 128 .. 256 : 0 0.0% | |
> 256 .. 512 : 10032 18.2% |################# |
> 512 .. 1024 : 5016 9.1% |######### |
>
> After this patch:
> 1) iostat results:
> Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz aqu-sz %util
> md0 87965.00 5271.88 0.00 0.00 0.16 61.37 14.03 90.60
> sd* 6020.00 658.44 5117.00 45.95 0.44 112.00 2.68 86.50
> 2) blktrace G stage:
> 8,0 0 206296 5.354894072 664 G R 7156992 + 128 [dd]
> 8,0 0 206305 5.355018179 664 G R 7157248 + 128 [dd]
> 8,0 0 206316 5.355204438 664 G R 7157504 + 128 [dd]
> 8,0 0 206319 5.355241048 664 G R 7157760 + 128 [dd]
> 8,0 0 206333 5.355500923 664 G R 7158016 + 128 [dd]
> 8,0 0 206344 5.355837806 664 G R 7158272 + 128 [dd]
> 8,0 0 206353 5.355960395 664 G R 7158528 + 128 [dd]
> 8,0 0 206357 5.356020772 664 G R 7158784 + 128 [dd]
> 3) io seek for sdx:
> Read|Write seek
> cnt 28644, zero cnt 21483
> >=(KB) .. <(KB) : count ratio |distribution |
> 0 .. 1 : 21483 75.0% |########################################|
> 1 .. 2 : 0 0.0% | |
> 2 .. 4 : 0 0.0% | |
> 4 .. 8 : 0 0.0% | |
> 8 .. 16 : 0 0.0% | |
> 16 .. 32 : 0 0.0% | |
> 32 .. 64 : 7161 25.0% |############## |
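>
> For reference, the seek statistic defined above can be reproduced with
> a small helper. This is only a sketch: parsing the blktrace D-stage
> output into offset/length records is left out, and the sample values
> below reuse the G-stage offsets shown earlier as stand-ins:
>
> #include <stdio.h>
> #include <stdlib.h>
>
> struct rec { long long off, len; };	/* in 512-byte sectors */
>
> /* ABS((offset of next IO) - (offset + len of previous IO)), in KiB */
> static long long seek_kb(struct rec prev, struct rec next)
> {
> 	return llabs(next.off - (prev.off + prev.len)) / 2;
> }
>
> int main(void)
> {
> 	struct rec trace[] = {
> 		{ 14071424, 128 }, { 14071168, 128 }, { 14071296, 128 },
> 	};
>
> 	for (int i = 1; i < 3; i++)	/* prints 192 KiB, then 0 KiB */
> 		printf("seek: %lld KiB\n", seek_kb(trace[i - 1], trace[i]));
> 	return 0;
> }
>
> Each per-IO seek distance is then bucketed into the histogram above.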
>
> BTW, this looks like a long-standing problem, present from day one, and
> large sequential reads are a pretty common case, like video playback.
>
> And even with this patch, IO in this test case is merged to at most 128k
> due to the block layer plug limit BLK_PLUG_FLUSH_SIZE; increasing that
> limit can give even better performance. However, we'll figure out how
> to do this properly later.
>
> Fixes: d89d87965dcb ("When stacked block devices are in-use (e.g. md or dm), the recursive calls")
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
Tested-by: Coly Li <colyli@...nel.org>
Yeah, with 'dd if=./on-fs-file of=/dev/null bs=6400K status=progress' I can
see the read throughput go from around 1.2GiB/s to 9.8GiB/s.
Great job and thank you!
Coly Li
> ---
> block/blk-core.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 4201504158a1..0d46d10edb22 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -745,12 +745,16 @@ void submit_bio_noacct_nocheck(struct bio *bio)
>  	 * to collect a list of requests submited by a ->submit_bio method while
>  	 * it is active, and then process them after it returned.
>  	 */
> -	if (current->bio_list)
> -		bio_list_add(&current->bio_list[0], bio);
> -	else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO))
> +	if (current->bio_list) {
> +		if (bio_flagged(bio, BIO_CHAIN))
> +			bio_list_add_head(&current->bio_list[0], bio);
> +		else
> +			bio_list_add(&current->bio_list[0], bio);
> +	} else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
>  		__submit_bio_noacct_mq(bio);
> -	else
> +	} else {
>  		__submit_bio_noacct(bio);
> +	}
>  }
>  
>  static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
> --
> 2.39.2
>
>