Message-ID: <20210629074825.s5f2d3ihuyscktg3@beryllium.lan>
Date: Tue, 29 Jun 2021 09:48:25 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Ming Lei <ming.lei@...hat.com>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
James Smart <james.smart@...adcom.com>,
Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH 2/2] nvme-fc: Wait with a timeout for queue to freeze
On Tue, Jun 29, 2021 at 09:39:30AM +0800, Ming Lei wrote:
> On Fri, Jun 25, 2021 at 12:16:49PM +0200, Daniel Wagner wrote:
> > Do not wait indefinitely for all queues to freeze. Instead use a
> > timeout and abort the operation if we get stuck.
> >
> > Signed-off-by: Daniel Wagner <dwagner@...e.de>
> > ---
> > drivers/nvme/host/fc.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> > index a9645cd89eca..d8db85aa5417 100644
> > --- a/drivers/nvme/host/fc.c
> > +++ b/drivers/nvme/host/fc.c
> > @@ -2955,7 +2955,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
> >  		dev_info(ctrl->ctrl.device,
> >  			"reconnect: revising io queue count from %d to %d\n",
> >  			prior_ioq_cnt, nr_io_queues);
> > -		nvme_wait_freeze(&ctrl->ctrl);
> > +		if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
> > +			/*
> > +			 * If we timed out waiting for freeze we are likely to
> > +			 * be stuck. Fail the controller initialization just
> > +			 * to be safe.
> > +			 */
> > +			return -ENODEV;
> > +		}
> >  		blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
> >  		nvme_unfreeze(&ctrl->ctrl);
>
> Can you investigate a bit why there is a hang? FC shouldn't use
> managed IRQs, so the interrupt won't be shut down.
>
> blk-mq debugfs may help to dump the requests after the hang is triggered,
> or you can still add debug code in nvme_wait_freeze_timeout() to dump
> all requests if blk-mq debugfs doesn't work.
Sure thing, I'll try to find out why it hangs. The good thing is that I
was able to reliably reproduce it. So let's see.
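
If the debugfs state isn't enough, something along the lines below is
what I'd bolt on for the request dump you suggested. The helper names
are mine and the busy_tag_iter_fn signature is from memory for the
current tree, so treat it as a sketch only:

/*
 * Debug only: dump whatever is still outstanding on the I/O tag set
 * once the freeze wait has timed out.
 */
static bool nvme_fc_dump_busy_rq(struct request *rq, void *priv, bool reserved)
{
	struct nvme_fc_ctrl *ctrl = priv;

	dev_info(ctrl->ctrl.device,
		 "busy rq: tag %d op %u started %d completed %d\n",
		 rq->tag, (unsigned int)req_op(rq),
		 blk_mq_request_started(rq),
		 blk_mq_request_completed(rq));
	return true;	/* keep iterating */
}

static void nvme_fc_dump_busy_rqs(struct nvme_fc_ctrl *ctrl)
{
	blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_fc_dump_busy_rq, ctrl);
}

I'd call nvme_fc_dump_busy_rqs() right before the return -ENODEV in the
hunk above, so we capture the state at the moment the timeout fires.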