Message-ID: <20190708190809.l4fdhigexzdujvuv@US-160370MP2.local>
Date: Mon, 8 Jul 2019 12:08:09 -0700
From: Liu Bo <bo.liu@...ux.alibaba.com>
To: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: linux-block@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] blk-throttle: fix zero wait time for iops throttled group
On Mon, Jul 08, 2019 at 06:29:57PM +0300, Konstantin Khlebnikov wrote:
> After commit 991f61fe7e1d ("Blk-throttle: reduce tail io latency when iops
> limit is enforced") the wait time could be zero even if the group is
> throttled and cannot issue requests right now. As a result,
> throtl_select_dispatch() turns into a busy-loop under the irq-safe queue
> spinlock.
>
> The fix is simple: always round the target time up to the next throttle slice.
>
> Fixes: 991f61fe7e1d ("Blk-throttle: reduce tail io latency when iops limit is enforced")
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
> Cc: stable@...r.kernel.org # v4.19+
> ---
> block/blk-throttle.c | 9 +++------
> 1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/block/blk-throttle.c b/block/blk-throttle.c
> index 9ea7c0ecad10..8ab6c8153223 100644
> --- a/block/blk-throttle.c
> +++ b/block/blk-throttle.c
> @@ -881,13 +881,10 @@ static bool tg_with_in_iops_limit(struct throtl_grp *tg, struct bio *bio,
> unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
> u64 tmp;
>
> - jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw];
> -
> - /* Slice has just started. Consider one slice interval */
> - if (!jiffy_elapsed)
> - jiffy_elapsed_rnd = tg->td->throtl_slice;
> + jiffy_elapsed = jiffies - tg->slice_start[rw];
>
> - jiffy_elapsed_rnd = roundup(jiffy_elapsed_rnd, tg->td->throtl_slice);
> + /* Round up to the next throttle slice, wait time must be nonzero */
> + jiffy_elapsed_rnd = roundup(jiffy_elapsed + 1, tg->td->throtl_slice);
>
> /*
> * jiffy_elapsed_rnd should not be a big value as minimum iops can be
Did you use a tiny iops limit to run into this?
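To illustrate the rounding change, here is a quick userspace sketch
(ROUNDUP stands in for the kernel's roundup(); the slice length and elapsed
values are made up, and this models only the jiffy_elapsed_rnd arithmetic,
not the full iops accounting).  With the old code the computed wait drops to
zero whenever the elapsed time lands exactly on a slice boundary, while
rounding up from jiffy_elapsed + 1 always leaves at least one jiffy to wait:

#include <stdio.h>

/* same idea as the kernel's roundup() macro */
#define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned long throtl_slice = 100;               /* assumed slice length, in jiffies */
	unsigned long elapsed[] = { 0, 100, 150, 200 }; /* jiffies since slice_start */

	for (int i = 0; i < 4; i++) {
		/* pre-patch: special-case 0, then round up to a slice multiple
		 * (which can equal elapsed itself) */
		unsigned long old_rnd = elapsed[i] ? ROUNDUP(elapsed[i], throtl_slice)
						   : throtl_slice;
		/* patched: round up from elapsed + 1, so the result is always > elapsed */
		unsigned long new_rnd = ROUNDUP(elapsed[i] + 1, throtl_slice);

		/* wait = rounded target - elapsed; a zero wait is what busy-loops the dispatcher */
		printf("elapsed=%3lu  old wait=%3lu  new wait=%3lu\n",
		       elapsed[i], old_rnd - elapsed[i], new_rnd - elapsed[i]);
	}
	return 0;
}

Running it shows old wait=0 for elapsed=100 and elapsed=200, which is
exactly the zero-wait case the patch closes.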
thanks,
-liubo