Message-ID: <ca251645-8d52-7a93-6ac2-579d97922a9e@huawei.com>
Date: Tue, 17 May 2022 11:12:28 +0800
From: "yukuai (C)" <yukuai3@...wei.com>
To: Tejun Heo <tj@...nel.org>,
Zhang Wensheng <zhangwensheng5@...wei.com>,
"ming.lei@...hat.com >> Ming Lei" <ming.lei@...hat.com>
CC: <axboe@...nel.dk>, <linux-block@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <cgroups@...r.kernel.org>
Subject: Re: [PATCH -next] block: fix io hung of setting throttle limit frequently

On 2022/05/17 3:29, Tejun Heo wrote:
> On Mon, May 16, 2022 at 09:44:29AM +0800, Zhang Wensheng wrote:
>> diff --git a/block/blk-throttle.c b/block/blk-throttle.c
>> index 469c483719be..8acb205dfa85 100644
>> --- a/block/blk-throttle.c
>> +++ b/block/blk-throttle.c
>> @@ -1321,12 +1321,14 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
>> * that a group's limit are dropped suddenly and we don't want to
>> * account recently dispatched IO with new low rate.
>> */
>> - throtl_start_new_slice(tg, READ);
>> - throtl_start_new_slice(tg, WRITE);
>> + if (!timer_pending(&sq->parent_sq->pending_timer)) {
>> + throtl_start_new_slice(tg, READ);
>> + throtl_start_new_slice(tg, WRITE);
>>
>> - if (tg->flags & THROTL_TG_PENDING) {
>> - tg_update_disptime(tg);
>> - throtl_schedule_next_dispatch(sq->parent_sq, true);
>> + if (tg->flags & THROTL_TG_PENDING) {
>> + tg_update_disptime(tg);
>> + throtl_schedule_next_dispatch(sq->parent_sq, true);
>> + }
>
> Yeah, but this ends up breaking the reason for starting the new slices
> in the first place, explained in the comment above, right? I'm not sure
> what the right solution is, but this likely isn't it.
>
Hi, Tejun
Ming added a condition in tg_with_in_bps_limit():
- if (bps_limit == U64_MAX) {
+ /* no need to throttle if this bio's bytes have been accounted */
+ if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
Which will let the first throttled bio be issued immediately once
the config is updated.
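
For context, that check sits at the top of tg_with_in_bps_limit(), so a
bio flagged BIO_THROTTLED reports a zero wait time and is dispatched
without further delay. Roughly (a sketch from my reading of
blk-throttle.c; the exact surrounding context may differ):

	/* no need to throttle if this bio's bytes have been accounted */
	if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
		if (wait)
			*wait = 0;
		return true;
	}
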
Do you think this behaviour is OK? If so, we can do the same for
tg_with_in_iops_limit().
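
For concreteness, the analogous change might look like the sketch below
(untested; the context lines are my assumption of what
tg_with_in_iops_limit() currently looks like):

--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ ... @@ static bool tg_with_in_iops_limit(struct throtl_grp *tg, struct bio *bio,
-	if (iops_limit == UINT_MAX) {
+	/* no need to throttle if this bio has already been accounted */
+	if (iops_limit == UINT_MAX || bio_flagged(bio, BIO_THROTTLED)) {
 		if (wait)
 			*wait = 0;
 		return true;
 	}
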
Thanks,
Kuai