Message-Id: <1929CA74-3770-4B5D-B0A5-759911E97815@linux.dev>
Date: Tue, 27 Aug 2024 15:24:43 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Ming Lei <ming.lei@...hat.com>
Cc: Muchun Song <songmuchun@...edance.com>,
Jens Axboe <axboe@...nel.dk>,
"open list:BLOCK LAYER" <linux-block@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] block: fix ordering between checking
QUEUE_FLAG_QUIESCED and adding requests to hctx->dispatch
> On Aug 26, 2024, at 17:20, Ming Lei <ming.lei@...hat.com> wrote:
>
> On Mon, Aug 26, 2024 at 03:33:18PM +0800, Muchun Song wrote:
>>
>>
>>> On Aug 26, 2024, at 15:06, Muchun Song <songmuchun@...edance.com> wrote:
>>>
>>> On Fri, Aug 23, 2024 at 7:28 PM Ming Lei <ming.lei@...hat.com> wrote:
>>>>
>>>> On Sun, Aug 11, 2024 at 06:19:21 PM +0800, Muchun Song wrote:
>>>>> Suppose the following scenario:
>>>>>
>>>>> CPU0                                            CPU1
>>>>>
>>>>> blk_mq_request_issue_directly()                 blk_mq_unquiesce_queue()
>>>>>   if (blk_queue_quiesced())                       blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
>>>>>     blk_mq_insert_request()                       blk_mq_run_hw_queues()
>>>>>       /*                                            blk_mq_run_hw_queue()
>>>>>        * Add request to dispatch list or set          if (!blk_mq_hctx_has_pending())         4) load
>>>>>        * bitmap of software queue.    1) store          return
>>>>>        */
>>>>>   blk_mq_run_hw_queue()
>>>>>     if (blk_queue_quiesced())         2) load
>>>>>       return
>>>>>     blk_mq_sched_dispatch_requests()
>>>>>
>>>>> A full memory barrier should be inserted between 1) and 2), as well as
>>>>> between 3) and 4), to make sure that either CPU0 sees QUEUE_FLAG_QUIESCED
>>>>> cleared or CPU1 sees the request added to the dispatch list (or the bit
>>>>> set in the software queue bitmap). Otherwise, neither CPU will re-run the
>>>>> hardware queue, causing starvation.
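
(For illustration, the barrier placement described above would look roughly
like the sketch below. This is not the actual patch hunks; in particular,
smp_mb__after_atomic() on the unquiesce side is an assumption that relies on
blk_queue_flag_clear() being an atomic clear_bit().)

	/* CPU0: insert path */
	blk_mq_insert_request(rq, 0);		/* 1) store: dispatch list or sw queue bitmap */
	smp_mb();				/* orders 1) before 2); pairs with CPU1 */
	if (blk_queue_quiesced(rq->q))		/* 2) load */
		return;

	/* CPU1: unquiesce path */
	blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);	/* 3) store */
	smp_mb__after_atomic();				/* orders 3) before 4); pairs with CPU0 */
	if (!blk_mq_hctx_has_pending(hctx))		/* 4) load */
		return;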
>>>>
>>>> A memory barrier shouldn't serve as a bug fix for two slow code paths.
>>>>
>>>> One simple fix is to add a blk_queue_quiesced_lock() helper and do the
>>>> following check on CPU0:
>>>>
>>>> 	if (blk_queue_quiesced_lock())
>>>> 		blk_mq_run_hw_queue();
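
For concreteness, a minimal sketch of such a helper. The name and shape are
assumed from the suggestion above; it simply re-reads the flag under
->queue_lock, which blk_mq_unquiesce_queue() also holds while clearing
QUEUE_FLAG_QUIESCED:

	static bool blk_queue_quiesced_lock(struct request_queue *q)
	{
		unsigned long flags;
		bool quiesced;

		spin_lock_irqsave(&q->queue_lock, flags);
		quiesced = blk_queue_quiesced(q);
		spin_unlock_irqrestore(&q->queue_lock, flags);

		return quiesced;
	}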
>>>
>>> This only fixes blk_mq_request_issue_directly(). I think anywhere that
>>> matches this pattern (inserting a request into the dispatch list and then
>>> running the hardware queue) should be fixed, and there are many places
>>> which match it (e.g. blk_mq_submit_bio()). The above graph should be
>>> adjusted to the following.
>>>
>>> CPU0                                       CPU1
>>>
>>> blk_mq_insert_request()   1) store         blk_mq_unquiesce_queue()
>>> blk_mq_run_hw_queue()                        blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
>>>   if (blk_queue_quiesced())  2) load         blk_mq_run_hw_queues()
>>>     return                                     blk_mq_run_hw_queue()
>>>   blk_mq_sched_dispatch_requests()               if (!blk_mq_hctx_has_pending())  4) load
>>>                                                    return
>>
>> Sorry, something went wrong with my email client. Resending the graph.
>>
>> CPU0                                       CPU1
>>
>> blk_mq_insert_request()   1) store         blk_mq_unquiesce_queue()
>> blk_mq_run_hw_queue()                        blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
>>   if (blk_queue_quiesced())  2) load         blk_mq_run_hw_queues()
>>     return                                     blk_mq_run_hw_queue()
>>   blk_mq_sched_dispatch_requests()               if (!blk_mq_hctx_has_pending())  4) load
>>                                                    return
>
> OK.
>
> The issue shouldn't exist if blk_queue_quiesced() returns false in
> blk_mq_run_hw_queue(), so it is still a race between two slow paths?
>
> I guess the barrier-less approach should work too, such as:
>
If we prefer the barrier-less approach, I think the following solution
will work as well; I'll use it in v2. Thanks.
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e3c3c0c21b55..632261982a77 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2202,6 +2202,12 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
>  }
>  EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
>  
> +static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
> +{
> +	return !blk_queue_quiesced(hctx->queue) &&
> +	       blk_mq_hctx_has_pending(hctx);
> +}
> +
>  /**
>   * blk_mq_run_hw_queue - Start to run a hardware queue.
>   * @hctx: Pointer to the hardware queue to run.
> @@ -2231,11 +2237,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
>  	 * quiesced.
>  	 */
>  	__blk_mq_run_dispatch_ops(hctx->queue, false,
> -		need_run = !blk_queue_quiesced(hctx->queue) &&
> -		blk_mq_hctx_has_pending(hctx));
> +		need_run = blk_mq_hw_queue_need_run(hctx));
>  
> -	if (!need_run)
> -		return;
> +	if (!need_run) {
> +		unsigned long flags;
> +
> +		/* sync with unquiesce */
> +		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
> +		need_run = blk_mq_hw_queue_need_run(hctx);
One question here: should we use __blk_mq_run_dispatch_ops() for this
re-check as well? I saw a comment above; it seems blk_mq_hw_queue_need_run()
is only safe to call under the [s]rcu lock.
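If it does need that protection, the locked re-check could presumably be
wrapped the same way. A sketch, assuming the [s]rcu read lock nests safely
inside ->queue_lock (it does not sleep):

	spin_lock_irqsave(&hctx->queue->queue_lock, flags);
	__blk_mq_run_dispatch_ops(hctx->queue, false,
			need_run = blk_mq_hw_queue_need_run(hctx));
	spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);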
Thanks.
> +		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
> +
> +		if (!need_run)
> +			return;
> +	}
>  
>  	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
>  		blk_mq_delay_run_hw_queue(hctx, 0);
>
>
> thanks,
> Ming
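
For context, here is the unquiesce side that the locked re-check pairs with
(mainline blk_mq_unquiesce_queue(), abridged): QUEUE_FLAG_QUIESCED is cleared
under the same ->queue_lock, which is why taking that lock around the re-check
closes the race without an explicit barrier.

	void blk_mq_unquiesce_queue(struct request_queue *q)
	{
		unsigned long flags;
		bool run_queue = false;

		spin_lock_irqsave(&q->queue_lock, flags);
		if (WARN_ON_ONCE(q->quiesce_depth <= 0)) {
			;
		} else if (!--q->quiesce_depth) {
			blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
			run_queue = true;
		}
		spin_unlock_irqrestore(&q->queue_lock, flags);

		/* dispatch requests which are inserted during quiescing */
		if (run_queue)
			blk_mq_run_hw_queues(q, true);
	}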