Message-ID: <CAFj5m9KmQZM8+nMORaQOJC-tzM0AOMz+kUyDa1r8V0NTcOVXtg@mail.gmail.com>
Date: Thu, 29 Aug 2024 10:51:36 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Muchun Song <muchun.song@...ux.dev>
Cc: Muchun Song <songmuchun@...edance.com>, Jens Axboe <axboe@...nel.dk>,
"open list:BLOCK LAYER" <linux-block@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Ming Lei <ming.lei@...hat.com>
Subject: Re: [PATCH 4/4] block: fix ordering between checking
QUEUE_FLAG_QUIESCED and adding requests to hctx->dispatch
On Tue, Aug 27, 2024 at 4:17 PM Muchun Song <muchun.song@...ux.dev> wrote:
>
>
>
> > On Aug 27, 2024, at 15:24, Muchun Song <muchun.song@...ux.dev> wrote:
> >
> >
> >
> >> On Aug 26, 2024, at 17:20, Ming Lei <ming.lei@...hat.com> wrote:
> >>
> >> On Mon, Aug 26, 2024 at 03:33:18PM +0800, Muchun Song wrote:
> >>>
> >>>
> >>>> On Aug 26, 2024, at 15:06, Muchun Song <songmuchun@...edance.com> wrote:
> >>>>
> >>>> On Fri, Aug 23, 2024 at 7:28 PM Ming Lei <ming.lei@...hat.com> wrote:
> >>>>>
> >>>>> On Sun, Aug 11, 2024 at 06:19:21 PM +0800, Muchun Song wrote:
> >>>>>> Supposing the following scenario.
> >>>>>>
> >>>>>> CPU0                                                  CPU1
> >>>>>>
> >>>>>> blk_mq_request_issue_directly()                       blk_mq_unquiesce_queue()
> >>>>>>   if (blk_queue_quiesced())                             blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
> >>>>>>     blk_mq_insert_request()                             blk_mq_run_hw_queues()
> >>>>>>       /*                                                  blk_mq_run_hw_queue()
> >>>>>>        * Add request to dispatch list or set bitmap of      if (!blk_mq_hctx_has_pending())  4) load
> >>>>>>        * software queue.  1) store                            return
> >>>>>>        */
> >>>>>>   blk_mq_run_hw_queue()
> >>>>>>     if (blk_queue_quiesced())  2) load
> >>>>>>       return
> >>>>>>     blk_mq_sched_dispatch_requests()
> >>>>>>
> >>>>>> A full memory barrier should be inserted between 1) and 2), as well as
> >>>>>> between 3) and 4), to make sure that either CPU0 sees that QUEUE_FLAG_QUIESCED
> >>>>>> is cleared or CPU1 sees the dispatch list (or the set bit in the software
> >>>>>> queue bitmap). Otherwise, neither CPU re-runs the hardware queue, causing
> >>>>>> starvation.
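For clarity, a minimal sketch of the pairing the commit message describes, hand-written
here for illustration only and not taken from any patch; it reuses the mainline helpers
blk_queue_flag_clear(), blk_queue_quiesced(), blk_mq_hctx_has_pending() and
blk_mq_run_hw_queue():

	/* CPU0, mirroring steps 1) and 2) of the diagram */
	blk_mq_insert_request(rq, 0);                   /* 1) store: request becomes visible */
	smp_mb();                                       /* full barrier, pairs with CPU1     */
	if (!blk_queue_quiesced(hctx->queue))           /* 2) load                           */
		blk_mq_run_hw_queue(hctx, false);

	/* CPU1, mirroring steps 3) and 4), i.e. blk_mq_unquiesce_queue() */
	blk_queue_flag_clear(QUEUE_FLAG_QUIESCED, q);   /* 3) store                          */
	smp_mb();                                       /* full barrier, pairs with CPU0     */
	if (blk_mq_hctx_has_pending(hctx))              /* 4) load: sees the queued request  */
		blk_mq_run_hw_queue(hctx, true);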
> >>>>>
> >>>>> A memory barrier shouldn't serve as the bug fix for two slow code paths.
> >>>>>
> >>>>> One simple fix is to add a blk_queue_quiesced_lock() helper, and
> >>>>> call the following check on CPU0:
> >>>>>
> >>>>> if (blk_queue_quiesced_lock())
> >>>>> blk_mq_run_hw_queue();
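Such a helper would just read the flag under ->queue_lock, which
blk_mq_quiesce_queue_nowait() and blk_mq_unquiesce_queue() already hold while flipping
QUEUE_FLAG_QUIESCED. A rough sketch of the suggestion (blk_queue_quiesced_lock() does
not exist in mainline; name and shape only follow the proposal above):

	static bool blk_queue_quiesced_lock(struct request_queue *q)
	{
		unsigned long flags;
		bool quiesced;

		/* serialize against blk_mq_quiesce/unquiesce, which take ->queue_lock */
		spin_lock_irqsave(&q->queue_lock, flags);
		quiesced = blk_queue_quiesced(q);
		spin_unlock_irqrestore(&q->queue_lock, flags);

		return quiesced;
	}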
> >>>>
> >>>> This only fixes blk_mq_request_issue_directly(). I think anywhere that
> >>>> matches this pattern (inserting a request into the dispatch list and then
> >>>> running the hardware queue) should be fixed, and there are many places
> >>>> which match it (e.g. blk_mq_submit_bio()). The above graph should be
> >>>> adjusted to the following.
> >>>>
> >>>> CPU0 CPU1
> >>>>
> >>>> blk_mq_insert_request() 1) store blk_mq_unquiesce_queue()
> >>>> blk_mq_run_hw_queue()
> >>>> blk_queue_flag_clear(QUEUE_FLAG_QUIESCED) 3) store
> >>>> if (blk_queue_quiesced()) 2) load blk_mq_run_hw_queues()
> >>>> return blk_mq_run_hw_queue()
> >>>> blk_mq_sched_dispatch_requests() if
> >>>> (!blk_mq_hctx_has_pending()) 4) load
> >>>> return
> >>>
> >>> Sorry, there is something wrong with my email client. Resending the graph.
> >>>
> >>> CPU0                                       CPU1
> >>>
> >>> blk_mq_insert_request()  1) store          blk_mq_unquiesce_queue()
> >>> blk_mq_run_hw_queue()                        blk_queue_flag_clear(QUEUE_FLAG_QUIESCED)  3) store
> >>>   if (blk_queue_quiesced())  2) load          blk_mq_run_hw_queues()
> >>>     return                                      blk_mq_run_hw_queue()
> >>>   blk_mq_sched_dispatch_requests()                if (!blk_mq_hctx_has_pending())  4) load
> >>>                                                     return
> >>
> >> OK.
> >>
> >> The issue shouldn't exist if blk_queue_quiesced() returns false in
> >> blk_mq_run_hw_queue(), so it is still a race between two slow paths?
> >>
> >> I guess the barrier-less approach should work too, such as:
> >>
> >
> > If we prefer a barrier-less approach, I think the following solution
> > will work as well; I'll use it in v2. Thanks.
> >
> >>
> >> diff --git a/block/blk-mq.c b/block/blk-mq.c
> >> index e3c3c0c21b55..632261982a77 100644
> >> --- a/block/blk-mq.c
> >> +++ b/block/blk-mq.c
> >> @@ -2202,6 +2202,12 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
> >>  }
> >>  EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
> >>
> >> +static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
> >> +{
> >> +        return !blk_queue_quiesced(hctx->queue) &&
> >> +                blk_mq_hctx_has_pending(hctx);
> >> +}
> >> +
> >>  /**
> >>   * blk_mq_run_hw_queue - Start to run a hardware queue.
> >>   * @hctx: Pointer to the hardware queue to run.
> >> @@ -2231,11 +2237,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
> >>           * quiesced.
> >>           */
> >>          __blk_mq_run_dispatch_ops(hctx->queue, false,
> >> -                need_run = !blk_queue_quiesced(hctx->queue) &&
> >> -                blk_mq_hctx_has_pending(hctx));
> >> +                need_run = blk_mq_hw_queue_need_run(hctx));
> >>
> >> -        if (!need_run)
> >> -                return;
> >> +        if (!need_run) {
> >> +                unsigned long flags;
> >> +
> >> +                /* sync with unquiesce */
> >> +                spin_lock_irqsave(&hctx->queue->queue_lock, flags);
>
> After some more thought, I think we need a big comment here to explain
> why we need to sync, because there are other callers of blk_queue_quiesced()
> which do not need to hold ->queue_lock to sync. Then I am wondering:
> is ->queue_lock really easier to maintain than a memory barrier? Developers
> still need to care about this either way, right? I don't see any obvious
> benefit, and the mb approach seems more efficient than a spinlock.
> Something like:
>
>         if (!need_run) {
>                 /* Add a comment here to explain what's going on here. */
>                 smp_mb();
>                 need_run = blk_mq_hw_queue_need_run(hctx);
>                 if (!need_run)
>                         return;
>         }
>
> I am not objecting to your approach; I just want to know whether you insist
> on the barrier-less approach here. If yes, I'm fine with it and can use it
> in v2.
Yes, as I mentioned, the race only exists between two slow code paths,
and we seldom use barriers in slow paths, where a traditional lock
provides a simpler & more readable solution. The READ/WRITE dependency
implied by any barrier is always hard to follow.
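For reference, the lock-based re-check argued for here could take roughly the
following shape at the end of blk_mq_run_hw_queue(); this is only a sketch of
the idea, not the final patch:

	if (!need_run) {
		unsigned long flags;

		/*
		 * Synchronize with blk_mq_unquiesce_queue(): QUEUE_FLAG_QUIESCED
		 * is cleared under ->queue_lock, so if the flag is still set here
		 * the later unquiesce is guaranteed to observe the newly added
		 * request and re-run the hardware queue.
		 */
		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
		need_run = !blk_queue_quiesced(hctx->queue) &&
			blk_mq_hctx_has_pending(hctx);
		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);

		if (!need_run)
			return;
	}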
Thanks,