Message-ID: <5fc124c9-e202-99ca-418d-0f52d027640f@huaweicloud.com>
Date: Mon, 10 Mar 2025 09:58:57 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Tejun Heo <tj@...nel.org>, Yu Kuai <yukuai1@...weicloud.com>
Cc: ming.lei@...hat.com, axboe@...nel.dk, josef@...icpanda.com,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, yi.zhang@...wei.com, yangerkun@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH] blk-throttle: support io merge over iops_limit
Hi,
On 2025/03/08 0:07, Tejun Heo wrote:
> Hello,
>
> On Fri, Mar 07, 2025 at 05:01:52PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@...wei.com>
>>
>> Commit 9f5ede3c01f9 ("block: throttle split bio in case of iops limit")
>> added support for accounting split IO against the iops limit, because
>> the block layer provides IO accounting for split bios.
>>
>> However, IO merge is still not handled, even though the block layer
>> does not account merged IO for iops. Fix this problem by decreasing
>> io_disp when a bio is merged, so that following IO can use the extra
>> budget. If an IO merge races with iops throttling, one bio more or less
>> may be dispatched; this is fine because, as long as a new slice is not
>> started, blk-throttle already preserves one extra slice for deviation,
>> and it's not worth handling the case where the iops_limit rate is less
>> than one per slice.
>>
>> A regression test will be added for this case [1], before this patch,
>> the test will fail:
>>
>> +++ /root/blktests-mainline/results/nodev/throtl/007.out.bad
>> @@ -1,4 +1,4 @@
>> Running throtl/007
>> 1
>> -1
>> +11
>> Test complete
>>
>> [1] https://lore.kernel.org/all/20250307080318.3860858-2-yukuai1@huaweicloud.com/
>
> For blk-throtl, iops limit has meant the number of bios issued. I'm not
Yes, but it's a little hard to explain to users the difference between IO
split and IO merge; they just think of IO split as the number of IOs
issued to the disk, and IO merge as the number of IOs issued from the user.
> necessarily against this change but this is significantly changing what a
> given configuration means. Also, if we're now doing hardware request based
> throttling, maybe we should just move this under rq-qos. That has the
> problem of not supporting bio-based drivers but maybe we can leave
> blk-throtl in deprecation mode and slowly phase it out.
Yes, we discussed this before.
And there is another angle that might convince you of this patch: if the
user workload triggers a lot of IO merging, and the iops limit is above
the actual iops on disk, then before this patch blk-throttle makes IO
merging impossible, resulting in performance degradation.
>
> Also, can you please make atomic_t conversion a separate patch and describe
> why that's being done?
Of course. The reason is that the new helper will decrease the counter
outside the lock.
Thanks,
Kuai
>
> Thanks.
>