Message-ID: <79c89923-f586-79e7-6dfd-c15ceb21f569@suse.de>
Date: Wed, 4 Aug 2021 09:23:49 +0200
From: Hannes Reinecke <hare@...e.de>
To: Daniel Wagner <dwagner@...e.de>, linux-nvme@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
James Smart <james.smart@...adcom.com>,
Keith Busch <kbusch@...nel.org>,
Ming Lei <ming.lei@...hat.com>,
Sagi Grimberg <sagi@...mberg.me>,
Wen Xiong <wenxiong@...ibm.com>,
James Smart <jsmart2021@...il.com>
Subject: Re: [PATCH v4 6/8] nvme-fc: fix controller reset hang during traffic
On 8/2/21 1:26 PM, Daniel Wagner wrote:
> From: James Smart <jsmart2021@...il.com>
>
> commit fe35ec58f0d3 ("block: update hctx map when use multiple maps")
> exposed an issue where we may hang trying to wait for queue freeze
> during I/O. We call blk_mq_update_nr_hw_queues which may attempt to
> freeze the queue. However, we never started a queue freeze when starting
> the reset, which means there are in-flight requests that entered the
> queue and that we will not complete once the queue is quiesced.
>
> So start a freeze before we quiesce the queue, and unfreeze the queue
> after we successfully connected the I/O queues (the unfreeze is already
> present in the code). blk_mq_update_nr_hw_queues will be called only
> after we are sure that the queue was already frozen.
>
> This follows how the PCI driver handles resets.
>
> This patch adds the logic introduced in commit 9f98772ba307 ("nvme-rdma:
> fix controller reset hang during traffic").
>
> Signed-off-by: James Smart <jsmart2021@...il.com>
> CC: Sagi Grimberg <sagi@...mberg.me>
> [dwagner: call nvme_unfreeze() unconditionally in
> nvme_fc_recreate_io_queues() to match the nvme_start_freeze()]
> Tested-by: Daniel Wagner <dwagner@...e.de>
> Reviewed-by: Daniel Wagner <dwagner@...e.de>
> ---
> drivers/nvme/host/fc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 133b87db4f1d..b292af0fd655 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2486,6 +2486,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
> * (but with error status).
> */
> if (ctrl->ctrl.queue_count > 1) {
> + nvme_start_freeze(&ctrl->ctrl);
> nvme_stop_queues(&ctrl->ctrl);
> nvme_sync_io_queues(&ctrl->ctrl);
> blk_mq_tagset_busy_iter(&ctrl->tag_set,
> @@ -2966,8 +2967,8 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
> return -ENODEV;
> }
> blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
> - nvme_unfreeze(&ctrl->ctrl);
> }
> + nvme_unfreeze(&ctrl->ctrl);
>
> ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> if (ret)
>
There is still an imbalance now: we always call 'nvme_unfreeze()'
(irrespective of the number of queues), but we only call
'nvme_start_freeze()' if we have more than one queue.
This might lead to an imbalance in the mq_freeze_depth counter.
Wouldn't it be better to move the call to 'nvme_start_freeze()' out of
the if () condition to avoid the imbalance?
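
Something like this (completely untested, just to illustrate the idea)
in __nvme_fc_abort_outstanding_ios(), with the rest of the teardown
elided:

        /*
         * Start the freeze unconditionally so that it pairs with the
         * unconditional nvme_unfreeze() in nvme_fc_recreate_io_queues().
         */
        nvme_start_freeze(&ctrl->ctrl);
        if (ctrl->ctrl.queue_count > 1) {
                nvme_stop_queues(&ctrl->ctrl);
                nvme_sync_io_queues(&ctrl->ctrl);
                ...
        }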
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@...e.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer