Message-ID: <67425a7b-f9e1-d7a9-9ec8-158f9f8ce13e@huawei.com>
Date: Thu, 12 May 2022 09:30:16 +0800
From: "yukuai (C)" <yukuai3@...wei.com>
To: Jan Kara <jack@...e.cz>
CC: <paolo.valente@...aro.org>, <axboe@...nel.dk>,
<linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<yi.zhang@...wei.com>
Subject: Re: [PATCH -next 2/2] block, bfq: make bfq_has_work() more accurate
On 2022/05/11 22:08, Jan Kara wrote:
> On Tue 10-05-22 21:16:29, Yu Kuai wrote:
>> bfq_has_work() currently uses busy_queues, which is not accurate:
>> a bfq_queue being busy does not mean that it has requests. Since
>> bfqd already has a counter 'queued' that records how many requests
>> are in bfq, use it instead of busy_queues.
>>
>> Note that bfq_has_work() can be called with 'bfqd->lock' held, so
>> the lock can't be taken in bfq_has_work() to protect 'bfqd->queued'.
>>
>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>
> So did you find this causing any real problem? Because bfq queue is
> accounted among busy queues once bfq_add_bfqq_busy() is called. And that
> happens once a new request is inserted into the queue so it should be very
> similar to bfqd->queued.
>
> Honza
Hi,
The related problem is described here:
https://lore.kernel.org/all/20220510112302.1215092-1-yukuai3@huawei.com/
The root cause of the panic is a problem in linux-block; however, it
can be bypassed if bfq_has_work() is accurate. On the other hand, an
unnecessary run_work is triggered for as long as the bfqq stays busy:
__blk_mq_run_hw_queue
  __blk_mq_sched_dispatch_requests
    __blk_mq_do_dispatch_sched
      if (!bfq_has_work())
        break;
      blk_mq_delay_run_hw_queues -> run again after 3ms
Thanks,
Kuai
>
>> ---
>> block/bfq-iosched.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>> index 61750696e87f..1d2f8110c26b 100644
>> --- a/block/bfq-iosched.c
>> +++ b/block/bfq-iosched.c
>> @@ -5063,11 +5063,11 @@ static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
>> struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
>>
>> /*
>> - * Avoiding lock: a race on bfqd->busy_queues should cause at
>> + * Avoiding lock: a race on bfqd->queued should cause at
>> * most a call to dispatch for nothing
>> */
>> return !list_empty_careful(&bfqd->dispatch) ||
>> - bfq_tot_busy_queues(bfqd) > 0;
>> + READ_ONCE(bfqd->queued);
>> }
>>
>> static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
>> --
>> 2.31.1
>>