Message-ID: <20150504195137.GB3365187@devbig257.prn2.facebook.com>
Date: Mon, 4 May 2015 12:51:48 -0700
From: Shaohua Li <shli@...com>
To: Jens Axboe <axboe@...com>
CC: <linux-kernel@...r.kernel.org>, <Kernel-team@...com>
Subject: Re: [PATCH] blk-mq: don't lose requests if a stopped queue restarts
On Mon, May 04, 2015 at 01:17:19PM -0600, Jens Axboe wrote:
> On 05/02/2015 06:31 PM, Shaohua Li wrote:
> >Normally, if the driver is too busy to dispatch a request, the logic is
> >as below:
> >block layer:                              driver:
> >    __blk_mq_run_hw_queue
> >a.                                        blk_mq_stop_hw_queue
> >b. rq add to ctx->dispatch
> >
> >later:
> >1. blk_mq_start_hw_queue
> >2. __blk_mq_run_hw_queue
> >
> >But it's possible that steps 1-2 run between a and b. And since the rq
> >isn't in ctx->dispatch yet, step 2 will not run it. The rq can then get
> >lost if no subsequent requests come in to kick the queue.
>
> Good catch! But the patch introduces a potentially never ending loop
> in __blk_mq_run_hw_queue(). Not sure how we can fully close it, but
> it might be better to punt the re-run to the worker after adding the
> requests back. That would turn a potential busy loop (until requests
> complete) into something with nicer behavior, at least. A la:
>
> 	if (!test_bit(BLK_MQ_S_STOPPED, &hctx->state))
> 		kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
> 						 &hctx->run_work, 0);
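For context, that deferral would sit at the tail of __blk_mq_run_hw_queue(),
right after the leftover requests are spliced back onto hctx->dispatch. A
rough sketch against the 4.0-era code (paraphrased, not a verbatim patch):

	if (!list_empty(&rq_list)) {
		spin_lock(&hctx->lock);
		list_splice(&rq_list, &hctx->dispatch);
		spin_unlock(&hctx->lock);

		/*
		 * The queue may have been restarted between the driver
		 * returning BUSY (and stopping the queue) and the splice
		 * above; that restart saw an empty dispatch list and did
		 * nothing, so the queue has to be kicked again here.
		 * Punting to kblockd avoids busy-looping in this context
		 * while the outstanding requests are still in flight.
		 */
		if (!test_bit(BLK_MQ_S_STOPPED, &hctx->state))
			kblockd_schedule_delayed_work_on(
					blk_mq_hctx_next_cpu(hctx),
					&hctx->run_work, 0);
	}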
My first version of the patch was like this, but I changed my mind later.
The assumption is that a driver will stop the queue if it's too busy to
dispatch a request; only a buggy driver would cause the endless loop here.
Should we assume drivers will not do the right thing?
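For comparison, the inline re-run under discussion would look roughly like
the below (illustrative sketch only; the label and its placement are
assumptions, not the posted patch):

again:
	/* normal dispatch loop; the driver may return BUSY and
	 * call blk_mq_stop_hw_queue() itself ... */

	if (!list_empty(&rq_list)) {
		spin_lock(&hctx->lock);
		list_splice(&rq_list, &hctx->dispatch);
		spin_unlock(&hctx->lock);

		/*
		 * A well-behaved driver already stopped the queue when it
		 * returned BUSY, so this test normally fails and we bail
		 * out.  Only a driver that returns BUSY without stopping
		 * the queue keeps spinning here until its requests
		 * complete.
		 */
		if (!test_bit(BLK_MQ_S_STOPPED, &hctx->state))
			goto again;
	}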
Thanks,
Shaohua