Message-ID: <19568a6b-66a8-bb93-7c8c-3b523972535a@huaweicloud.com>
Date: Fri, 28 Jun 2024 11:34:32 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Yu Kuai <yukuai1@...weicloud.com>, tj@...nel.org, josef@...icpanda.com,
axboe@...nel.dk
Cc: cgroups@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, yi.zhang@...wei.com, yangerkun@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v2] blk-throttle: fix lower control under super low iops
limit
Hi,
On 2024/06/18 14:21, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@...wei.com>
>
> Users configure the allowed iops limit per second, and calculate_io_allowed()
> computes the allowed iops within a slice as:
>
> limit * throtl_slice / HZ
>
> However, if the limit is low enough, this integer division can yield 0,
> so the allowed IO in the slice is 0. This causes missed dispatches, and
> the actual throughput ends up lower than the configured limit.
>
> For example, set iops_limit to 5 on an HD disk, and testing shows that
> the actual iops is only 3.
>
> This is usually not a big deal, because users are unlikely to configure
> such a low iops limit; however, it is still a problem in this extreme
> case.
>
> Fix the problem by making sure the wait time calculated by
> tg_within_iops_limit() allows at least one IO to be dispatched.
Friendly ping ...
>
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
> Changes in v2:
> - instead of extending throtl_slice, extend the wait time;
> block/blk-throttle.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/block/blk-throttle.c b/block/blk-throttle.c
> index c1bf73f8c75d..dc6140fa3de0 100644
> --- a/block/blk-throttle.c
> +++ b/block/blk-throttle.c
> @@ -704,6 +704,9 @@ static unsigned long tg_within_iops_limit(struct throtl_grp *tg, struct bio *bio
>
> /* Calc approx time to dispatch */
> jiffy_wait = jiffy_elapsed_rnd - jiffy_elapsed;
> +
> + /* make sure at least one io can be dispatched after waiting */
> + jiffy_wait = max(jiffy_wait, HZ / iops_limit + 1);
> return jiffy_wait;
> }
>
>