Message-ID: <3d4edf17-2be6-b7c3-a6fd-b439e5e52eab@gmail.com>
Date: Thu, 12 Aug 2021 13:03:07 -0700
From: James Smart <jsmart2021@...il.com>
To: Hannes Reinecke <hare@...e.de>, Daniel Wagner <dwagner@...e.de>,
linux-nvme@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
James Smart <james.smart@...adcom.com>,
Keith Busch <kbusch@...nel.org>,
Ming Lei <ming.lei@...hat.com>,
Sagi Grimberg <sagi@...mberg.me>,
Wen Xiong <wenxiong@...ibm.com>
Subject: Re: [PATCH v4 6/8] nvme-fc: fix controller reset hang during traffic
On 8/4/2021 12:23 AM, Hannes Reinecke wrote:
> On 8/2/21 1:26 PM, Daniel Wagner wrote:
>> From: James Smart <jsmart2021@...il.com>
>>
>> commit fe35ec58f0d3 ("block: update hctx map when use multiple maps")
>> exposed an issue where we may hang trying to wait for queue freeze
>> during I/O. We call blk_mq_update_nr_hw_queues which may attempt to freeze
>> the queue. However, we never started a queue freeze when starting the
>> reset, which means there are in-flight requests that entered the
>> queue and will not complete once the queue is quiesced.
>>
>> So start a freeze before we quiesce the queue, and unfreeze the queue
>> after we have successfully connected the I/O queues (the unfreeze is
>> already present in the code). blk_mq_update_nr_hw_queues will be called
>> only after we are sure the queue is frozen.
>>
>> This follows how the pci driver handles resets.
>>
>> This patch adds the logic introduced in commit 9f98772ba307 ("nvme-rdma:
>> fix controller reset hang during traffic").
>>
>> Signed-off-by: James Smart <jsmart2021@...il.com>
>> CC: Sagi Grimberg <sagi@...mberg.me>
>> [dwagner: call nvme_unfreeze() unconditionally in
>> nvme_fc_recreate_io_queues() to match the nvme_start_freeze()]
>> Tested-by: Daniel Wagner <dwagner@...e.de>
>> Reviewed-by: Daniel Wagner <dwagner@...e.de>
>> ---
>> drivers/nvme/host/fc.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
>> index 133b87db4f1d..b292af0fd655 100644
>> --- a/drivers/nvme/host/fc.c
>> +++ b/drivers/nvme/host/fc.c
>> @@ -2486,6 +2486,7 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
>> * (but with error status).
>> */
>> if (ctrl->ctrl.queue_count > 1) {
>> + nvme_start_freeze(&ctrl->ctrl);
>> nvme_stop_queues(&ctrl->ctrl);
>> nvme_sync_io_queues(&ctrl->ctrl);
>> blk_mq_tagset_busy_iter(&ctrl->tag_set,
>> @@ -2966,8 +2967,8 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
>> return -ENODEV;
>> }
>> blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
>> - nvme_unfreeze(&ctrl->ctrl);
>> }
>> + nvme_unfreeze(&ctrl->ctrl);
>>
>> ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
>> if (ret)
>>
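Putting the two hunks together, the ordering the patch is going for is
roughly the following (just a sketch of the call order, not a literal
fc.c excerpt; the surrounding code and error handling are left out):

/* sketch of the intended call order only, not a literal fc.c excerpt */

/* teardown side: __nvme_fc_abort_outstanding_ios() */
if (ctrl->ctrl.queue_count > 1) {
	nvme_start_freeze(&ctrl->ctrl);	/* new: block new requests from entering */
	nvme_stop_queues(&ctrl->ctrl);	/* quiesce the io queues */
	nvme_sync_io_queues(&ctrl->ctrl);
	blk_mq_tagset_busy_iter(&ctrl->tag_set, ...);	/* abort in-flight requests */
	/* ... */
}

/* reconnect side: nvme_fc_recreate_io_queues() */
/* ... */
blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);	/* queue is frozen here */
nvme_unfreeze(&ctrl->ctrl);	/* moved out of the if () block, always runs */
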
> There is still an imbalance now, as we're always calling
> 'nvme_unfreeze()' (irrespective of the number of queues), but only
> call 'nvme_start_freeze()' if we have more than one queue.
>
> This might lead to an imbalance in the mq_freeze_depth counter.
> Wouldn't it be better to move the call to 'nvme_start_freeze()' out of
> the if() condition to avoid the imbalance?
>
> Cheers,
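
To make the mq_freeze_depth concern concrete, here is a tiny stand-alone
model of that counting (userspace toy, not block-layer code; it only
mimics how nvme_start_freeze()/nvme_unfreeze() are expected to nest via
q->mq_freeze_depth):

#include <stdio.h>

static int mq_freeze_depth;		/* stands in for q->mq_freeze_depth */

static void start_freeze(void) { mq_freeze_depth++; }
static void unfreeze(void)     { mq_freeze_depth--; }

int main(void)
{
	int queue_count = 1;		/* admin queue only, no io queues */

	if (queue_count > 1)		/* freeze only taken with io queues */
		start_freeze();

	/* ... teardown and reconnect ... */

	unfreeze();			/* called unconditionally */

	printf("mq_freeze_depth = %d\n", mq_freeze_depth);	/* prints -1 */
	return 0;
}
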
Daniel,
try this. It moves the location of the freeze so that it will always pair
with the unfreeze.
--- fc.c 2021-08-12 12:33:33.273278611 -0700
+++ fc.c.new 2021-08-12 13:01:16.185817238 -0700
@@ -2965,9 +2965,10 @@ nvme_fc_recreate_io_queues(struct nvme_f
prior_ioq_cnt, nr_io_queues);
nvme_wait_freeze(&ctrl->ctrl);
blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
- nvme_unfreeze(&ctrl->ctrl);
}
+ nvme_unfreeze(&ctrl->ctrl);
+
return 0;
out_delete_hw_queues:
@@ -3206,6 +3207,9 @@ nvme_fc_delete_association(struct nvme_f
ctrl->iocnt = 0;
spin_unlock_irqrestore(&ctrl->lock, flags);
+ if (ctrl->ctrl.queue_count > 1)
+ nvme_start_freeze(&ctrl->ctrl);
+
__nvme_fc_abort_outstanding_ios(ctrl, false);
/* kill the aens as they are a separate path */
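
With both hunks applied, the two paths come out roughly as below (again
only a sketch of the call order, assuming the usual fc.c surroundings,
not a literal excerpt):

/* nvme_fc_delete_association(): the freeze is started here now */
if (ctrl->ctrl.queue_count > 1)
	nvme_start_freeze(&ctrl->ctrl);

__nvme_fc_abort_outstanding_ios(ctrl, false);	/* quiesce + abort as before */

/* nvme_fc_recreate_io_queues(): the unfreeze no longer depends on the
 * hw queue count changing, so it pairs with the start_freeze above */
if (prior_ioq_cnt != nr_io_queues) {
	/* ... */
	nvme_wait_freeze(&ctrl->ctrl);
	blk_mq_update_nr_hw_queues(&ctrl->tag_set, nr_io_queues);
}
nvme_unfreeze(&ctrl->ctrl);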