Message-ID: <b048ede5-e673-4ba9-3c28-df077aa4467a@linux.vnet.ibm.com>
Date: Fri, 4 Dec 2020 15:35:25 -0600
From: Brian King <brking@...ux.vnet.ibm.com>
To: Tyrel Datwyler <tyreld@...ux.ibm.com>,
james.bottomley@...senpartnership.com
Cc: martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
brking@...ux.ibm.com
Subject: Re: [PATCH v3 18/18] ibmvfc: drop host lock when completing commands
in CRQ
On 12/2/20 8:08 PM, Tyrel Datwyler wrote:
> The legacy CRQ handling path holds the host lock even while completing
> commands. This presents a problem when we are in legacy single-queue mode
> but nr_hw_queues is greater than one, since calling scsi_done() with the
> host lock held introduces the potential for deadlock.
>
> If nr_hw_queues is greater than one, drop the host lock in the legacy
> CRQ path when completing a command.
>
> Signed-off-by: Tyrel Datwyler <tyreld@...ux.ibm.com>
> ---
> drivers/scsi/ibmvscsi/ibmvfc.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> index e499599662ec..e2200bdff2be 100644
> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> @@ -2969,6 +2969,7 @@ static void ibmvfc_handle_crq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost)
> {
> long rc;
> struct ibmvfc_event *evt = (struct ibmvfc_event *)be64_to_cpu(crq->ioba);
> + unsigned long flags;
>
> switch (crq->valid) {
> case IBMVFC_CRQ_INIT_RSP:
> @@ -3039,7 +3040,12 @@ static void ibmvfc_handle_crq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost)
> del_timer(&evt->timer);
> list_del(&evt->queue);
> ibmvfc_trc_end(evt);
> - evt->done(evt);
> + if (nr_scsi_hw_queues > 1) {
> + spin_unlock_irqrestore(vhost->host->host_lock, flags);
> + evt->done(evt);
> + spin_lock_irqsave(vhost->host->host_lock, flags);
> + } else
> + evt->done(evt);
Similar comment here as on the previous patch. The flags parameter is an
output for spin_lock_irqsave but an input for spin_unlock_irqrestore, and
here flags is a local that is never initialized before being passed to the
unlock. You'll need to rethink the locking here. You could just do a
spin_unlock_irq / spin_lock_irq pair here, and that would probably be OK,
but probably isn't the best approach.
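
For reference, a minimal sketch of what that spin_unlock_irq / spin_lock_irq
variant could look like (this assumes every caller of ibmvfc_handle_crq()
takes host_lock from a context where enabling interrupts across the
completion is safe):

	if (nr_scsi_hw_queues > 1) {
		/* Drop host_lock (re-enabling interrupts) around the
		 * completion so scsi_done() can't deadlock on it, then
		 * re-take it before continuing CRQ processing.
		 */
		spin_unlock_irq(vhost->host->host_lock);
		evt->done(evt);
		spin_lock_irq(vhost->host->host_lock);
	} else {
		evt->done(evt);
	}

The catch, and presumably why it isn't the best option, is that
spin_unlock_irq unconditionally re-enables interrupts, so it is only correct
if this path can never be entered with interrupts disabled for some other
reason.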
> }
>
> /**
>
--
Brian King
Power Linux I/O
IBM Linux Technology Center