Message-ID: <ZuEUiScRwuXgIrC0@fedora>
Date: Wed, 11 Sep 2024 11:54:49 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: Muchun Song <songmuchun@...edance.com>, yukuai1@...weicloud.com,
	linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
	muchun.song@...ux.dev, stable@...r.kernel.org, ming.lei@...hat.com
Subject: Re: [PATCH v2 2/3] block: fix ordering between checking
	QUEUE_FLAG_QUIESCED and adding requests
On Tue, Sep 10, 2024 at 07:22:16AM -0600, Jens Axboe wrote:
> On 9/3/24 2:16 AM, Muchun Song wrote:
> > Supposing the following scenario.
> >
> > CPU0                                      CPU1
> >
> > blk_mq_insert_request()  1) store         blk_mq_unquiesce_queue()
> > blk_mq_run_hw_queue()                       blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
> >   if (blk_queue_quiesced())  2) load        blk_mq_run_hw_queues()
> >     return                                    blk_mq_run_hw_queue()
> > blk_mq_sched_dispatch_requests()                if (!blk_mq_hctx_has_pending())  4) load
> >                                                   return
> >
> > A full memory barrier should be inserted between 1) and 2), as well
> > as between 3) and 4), to make sure that either CPU0 sees that
> > QUEUE_FLAG_QUIESCED is cleared, or CPU1 sees the dispatch list (or
> > the set bit in the software queue's bitmap). Otherwise, neither CPU
> > will re-run the hardware queue, causing starvation.
> >
> > So the first solution is to 1) add a pair of memory barriers to fix
> > the problem; another solution is to 2) use hctx->queue->queue_lock
> > to synchronize checking QUEUE_FLAG_QUIESCED. Here we chose 2) to fix
> > it, since memory barriers are not easy to maintain.
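
[ For reference, not part of the series: the barrier pair of solution
  1) would look roughly like the sketch below. The placement is my
  illustration of the commit message above, reusing the existing
  blk-mq names. ]

	/* CPU0: blk_mq_run_hw_queue(), called after the request is queued */
	smp_mb();	/* orders 1) store vs. 2) load; pairs with CPU1 */
	if (blk_queue_quiesced(hctx->queue))	/* 2) load */
		return;	/* safe: CPU1 is now guaranteed to see the request */

	/* CPU1: blk_mq_unquiesce_queue() */
	blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);	/* 3) store */
	smp_mb();	/* orders 3) store vs. 4) load; pairs with CPU0 */
	blk_mq_run_hw_queues(q, true);	/* 4) load in blk_mq_hctx_has_pending() */
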
>
> Same comment here, 72-74 chars wide please.
>
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index b2d0f22de0c7f..ac39f2a346a52 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -2202,6 +2202,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
> >  }
> >  EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
> >  
> > +static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
> > +{
> > +	bool need_run;
> > +
> > +	/*
> > +	 * When queue is quiesced, we may be switching io scheduler, or
> > +	 * updating nr_hw_queues, or other things, and we can't run queue
> > +	 * any more, even blk_mq_hctx_has_pending() can't be called safely.
> > +	 *
> > +	 * And queue will be rerun in blk_mq_unquiesce_queue() if it is
> > +	 * quiesced.
> > +	 */
> > +	__blk_mq_run_dispatch_ops(hctx->queue, false,
> > +				  need_run = !blk_queue_quiesced(hctx->queue) &&
> > +				  blk_mq_hctx_has_pending(hctx));
> > +	return need_run;
> > +}
>
> This __blk_mq_run_dispatch_ops() is also way too wide, why didn't you
> just break it like where you copied it from?
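
[ i.e. the removed call site in the second hunk below breaks the
  continuation at a single extra tab: ]

	__blk_mq_run_dispatch_ops(hctx->queue, false,
		need_run = !blk_queue_quiesced(hctx->queue) &&
		blk_mq_hctx_has_pending(hctx));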
>
> > +
> >  /**
> >   * blk_mq_run_hw_queue - Start to run a hardware queue.
> >   * @hctx: Pointer to the hardware queue to run.
> > @@ -2222,20 +2240,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
> >  
> >  	might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
> >  
> > -	/*
> > -	 * When queue is quiesced, we may be switching io scheduler, or
> > -	 * updating nr_hw_queues, or other things, and we can't run queue
> > -	 * any more, even __blk_mq_hctx_has_pending() can't be called safely.
> > -	 *
> > -	 * And queue will be rerun in blk_mq_unquiesce_queue() if it is
> > -	 * quiesced.
> > -	 */
> > -	__blk_mq_run_dispatch_ops(hctx->queue, false,
> > -		need_run = !blk_queue_quiesced(hctx->queue) &&
> > -		blk_mq_hctx_has_pending(hctx));
> > +	need_run = blk_mq_hw_queue_need_run(hctx);
> > +	if (!need_run) {
> > +		unsigned long flags;
> >  
> > -	if (!need_run)
> > -		return;
> > +		/*
> > +		 * Synchronize with blk_mq_unquiesce_queue(): because we
> > +		 * check if the hw queue is quiesced locklessly above, we
> > +		 * need to use ->queue_lock to make sure we see the
> > +		 * up-to-date status and do not miss rerunning the hw queue.
> > +		 */
> > +		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
> > +		need_run = blk_mq_hw_queue_need_run(hctx);
> > +		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
> > +
> > +		if (!need_run)
> > +			return;
> > +	}
>
> Is this not solvable on the unquiesce side instead? It's rather a shame
> to add overhead to the fast path to avoid a race with something that's
> super unlikely, like quiesce.
Yeah, it can be solved by adding synchronize_rcu()/srcu() on the
unquiesce side, but SCSI may call unquiesce from non-sleepable context
via scsi_internal_device_unblock_nowait().
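
Something like the following untested sketch, based on the current
blk_mq_unquiesce_queue() (the blk_mq_wait_quiesce_done() call is my
illustration of that alternative), shows the problem:

void blk_mq_unquiesce_queue(struct request_queue *q)
{
	unsigned long flags;
	bool run_queue = false;

	spin_lock_irqsave(&q->queue_lock, flags);
	if (WARN_ON_ONCE(q->quiesce_depth <= 0)) {
		;
	} else if (!--q->quiesce_depth) {
		blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);
		run_queue = true;
	}
	spin_unlock_irqrestore(&q->queue_lock, flags);

	if (run_queue) {
		/*
		 * Wait for in-progress lockless QUIESCED checks in
		 * blk_mq_run_hw_queue()'s dispatch_ops section, so the
		 * rerun below can't miss a request queued before the
		 * flag was cleared.  But this may sleep, and SCSI can
		 * unquiesce from atomic context.
		 */
		blk_mq_wait_quiesce_done(q->tag_set);

		/* dispatch requests which are inserted during quiescing */
		blk_mq_run_hw_queues(q, true);
	}
}
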
Thanks,
Ming