Message-ID: <451323f3-e778-81d0-38e9-5f098ed3426c@linux.vnet.ibm.com>
Date: Thu, 25 Feb 2021 15:17:50 -0600
From: Brian King <brking@...ux.vnet.ibm.com>
To: Tyrel Datwyler <tyreld@...ux.ibm.com>,
james.bottomley@...senpartnership.com
Cc: martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
brking@...ux.ibm.com
Subject: Re: [PATCH v2 5/5] ibmvfc: reinitialize sub-CRQs and perform channel
enquiry after LPM
On 2/25/21 2:48 PM, Tyrel Datwyler wrote:
> A live partition migration (LPM) results in a CRQ disconnect similar to
> a hard reset. In the LPM case the hypervisor mostly preserves the CRQ
> transport such that it simply needs to be reenabled. However, the
> capabilities may have changed, such as fewer channels or no channels at
> all. Further, it's possible that there may be sub-CRQ support, but no
> channel support. The CRQ reenable path currently doesn't take any of
> this into consideration.
>
> For simplicity, release and reinitialize sub-CRQs during reenable, and
> set do_enquiry and using_channels to the appropriate values to trigger
> channel renegotiation.
>
> Signed-off-by: Tyrel Datwyler <tyreld@...ux.ibm.com>
> ---
> drivers/scsi/ibmvscsi/ibmvfc.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> index 4ac2c442e1e2..9ae6be56e375 100644
> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> @@ -903,6 +903,9 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
> {
> int rc = 0;
> struct vio_dev *vdev = to_vio_dev(vhost->dev);
> + unsigned long flags;
> +
> + ibmvfc_release_sub_crqs(vhost);
>
> /* Re-enable the CRQ */
> do {
> @@ -914,6 +917,16 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
> if (rc)
> dev_err(vhost->dev, "Error enabling adapter (rc=%d)\n", rc);
>
> + spin_lock_irqsave(vhost->host->host_lock, flags);
> + spin_lock(vhost->crq.q_lock);
> + vhost->do_enquiry = 1;
> + vhost->using_channels = 0;
> +
> + ibmvfc_init_sub_crqs(vhost);
> +
> + spin_unlock(vhost->crq.q_lock);
> + spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_init_sub_crqs() can sleep for multiple reasons, so you can't hold
a lock when you call it: there is a GFP_KERNEL allocation in it, and the
patch before this one adds an msleep in an error path.
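
A minimal sketch of one way to restructure this, assuming the flag
updates are the only thing that actually needs the locks here
(untested, just to illustrate the ordering):

	spin_lock_irqsave(vhost->host->host_lock, flags);
	spin_lock(vhost->crq.q_lock);
	vhost->do_enquiry = 1;
	vhost->using_channels = 0;
	spin_unlock(vhost->crq.q_lock);
	spin_unlock_irqrestore(vhost->host->host_lock, flags);

	/*
	 * Can sleep (GFP_KERNEL allocation, and an msleep in an error
	 * path), so call it with no spinlocks held.
	 */
	ibmvfc_init_sub_crqs(vhost);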
Thanks,
Brian
--
Brian King
Power Linux I/O
IBM Linux Technology Center