Message-ID: <YNp4GyXwOlJeqtby@T590>
Date: Tue, 29 Jun 2021 09:32:11 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Daniel Wagner <dwagner@...e.de>
Cc: linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
James Smart <james.smart@...adcom.com>,
Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH 1/2] nvme-fc: Update hardware queues before using them
On Fri, Jun 25, 2021 at 12:16:48PM +0200, Daniel Wagner wrote:
> In case the number of hardware queues changes, update the tagset
> and the ctx-to-hctx mapping first, before using the mapping to
> recreate and connect the IO queues.
>
> Signed-off-by: Daniel Wagner <dwagner@...e.de>
> ---
> drivers/nvme/host/fc.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 8a3c4814d21b..a9645cd89eca 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2951,14 +2951,6 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
> if (ctrl->ctrl.queue_count == 1)
> return 0;
>
> - ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> - if (ret)
> - goto out_free_io_queues;
> -
> - ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> - if (ret)
> - goto out_delete_hw_queues;
> -
> if (prior_ioq_cnt != nr_io_queues) {
> dev_info(ctrl->ctrl.device,
> "reconnect: revising io queue count from %d to %d\n",
> @@ -2968,6 +2960,14 @@ nvme_fc_recreate_io_queues(struct nvme_fc_ctrl *ctrl)
> nvme_unfreeze(&ctrl->ctrl);
> }
>
> + ret = nvme_fc_create_hw_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> + if (ret)
> + goto out_free_io_queues;
> +
> + ret = nvme_fc_connect_io_queues(ctrl, ctrl->ctrl.sqsize + 1);
> + if (ret)
> + goto out_delete_hw_queues;
> +
> return 0;
>
> out_delete_hw_queues:
> --
> 2.29.2
>
This way the correct hctx_idx can be passed to blk_mq_alloc_request_hctx().
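
As a rough illustration of the ordering concern, here is a minimal
userspace sketch (not the driver code; fake_tagset, alloc_request_hctx()
and connect_io_queues() are made-up stand-ins for blk_mq_tag_set,
blk_mq_alloc_request_hctx() and the per-queue connects done by
nvme_fc_connect_io_queues()). With the old order the connects run
against the stale queue count; with the new order the count is updated
first:

#include <stdio.h>
#include <stdbool.h>

struct fake_tagset {
	unsigned int nr_hw_queues;	/* stand-in for blk_mq_tag_set */
};

/* stand-in for blk_mq_alloc_request_hctx(): hctx_idx must be mapped */
static bool alloc_request_hctx(struct fake_tagset *set, unsigned int hctx_idx)
{
	if (hctx_idx >= set->nr_hw_queues) {
		printf("alloc on hctx %u failed: only %u hw queues mapped\n",
		       hctx_idx, set->nr_hw_queues);
		return false;
	}
	printf("alloc on hctx %u ok\n", hctx_idx);
	return true;
}

/* stand-in for the per-queue connects: one request per IO queue */
static bool connect_io_queues(struct fake_tagset *set, unsigned int nr_io_queues)
{
	for (unsigned int q = 0; q < nr_io_queues; q++) {
		if (!alloc_request_hctx(set, q))
			return false;
	}
	return true;
}

int main(void)
{
	struct fake_tagset set = { .nr_hw_queues = 2 };	/* prior_ioq_cnt */
	unsigned int nr_io_queues = 4;			/* revised count */

	/* old order: connect before updating the tag set -> out-of-range hctx */
	connect_io_queues(&set, nr_io_queues);

	/* new order (this patch): update the mapping first, then connect */
	set.nr_hw_queues = nr_io_queues;	/* blk_mq_update_nr_hw_queues() */
	connect_io_queues(&set, nr_io_queues);

	return 0;
}
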
Reviewed-by: Ming Lei <ming.lei@...hat.com>
Thanks,
Ming