Message-Id: <20210114203148.246656-20-tyreld@linux.ibm.com>
Date: Thu, 14 Jan 2021 14:31:46 -0600
From: Tyrel Datwyler <tyreld@...ux.ibm.com>
To: james.bottomley@...senpartnership.com
Cc: martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
brking@...ux.ibm.com, Tyrel Datwyler <tyreld@...ux.ibm.com>,
Brian King <brking@...ux.vnet.ibm.com>
Subject: [PATCH v5 19/21] ibmvfc: purge scsi channels after transport loss/reset

Grab the queue and list locks for each Sub-CRQ and splice any
uncompleted events onto the host purge list.

Signed-off-by: Tyrel Datwyler <tyreld@...ux.ibm.com>
Reviewed-by: Brian King <brking@...ux.vnet.ibm.com>
---
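Note for review context: below is a minimal user-space sketch of the
per-queue purge pattern this patch applies (take the outer queue lock,
then the inner sent-list lock, fail each outstanding event, and splice
the whole sent list onto a single host purge list). All names and types
in the sketch are hypothetical stand-ins for illustration only, not the
driver's actual structures or locking primitives.

/*
 * Sketch of the purge pattern: per-queue q_lock (queue state) nested
 * over l_lock (sent list), then move all in-flight events to one
 * shared purge list. Hypothetical types; not ibmvfc code.
 */
#include <pthread.h>
#include <stdio.h>

struct event {
	int tag;
	struct event *next;
};

struct queue {
	pthread_mutex_t q_lock;   /* protects queue state */
	pthread_mutex_t l_lock;   /* protects the sent list */
	struct event *sent;       /* in-flight events for this queue */
};

/* Fail every event on q->sent and push it onto *purge. */
static void purge_queue(struct queue *q, struct event **purge, int error_code)
{
	struct event *evt, *next;

	pthread_mutex_lock(&q->q_lock);
	pthread_mutex_lock(&q->l_lock);
	for (evt = q->sent; evt; evt = next) {
		next = evt->next;
		printf("failing event %d with error %d\n", evt->tag, error_code);
		evt->next = *purge;       /* push onto the purge list */
		*purge = evt;
	}
	q->sent = NULL;                   /* sent list is now empty */
	pthread_mutex_unlock(&q->l_lock);
	pthread_mutex_unlock(&q->q_lock);
}

int main(void)
{
	struct event e1 = { .tag = 1 }, e2 = { .tag = 2, .next = &e1 };
	struct queue q = {
		.q_lock = PTHREAD_MUTEX_INITIALIZER,
		.l_lock = PTHREAD_MUTEX_INITIALIZER,
		.sent = &e2,
	};
	struct event *purge = NULL;

	purge_queue(&q, &purge, -5);

	for (struct event *evt = purge; evt; evt = evt->next)
		printf("event %d is on the purge list\n", evt->tag);
	return 0;
}
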
drivers/scsi/ibmvscsi/ibmvfc.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 5ca8fcafd1d5..d314dffaafc4 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -1056,7 +1056,13 @@ static void ibmvfc_fail_request(struct ibmvfc_event *evt, int error_code)
static void ibmvfc_purge_requests(struct ibmvfc_host *vhost, int error_code)
{
struct ibmvfc_event *evt, *pos;
+ struct ibmvfc_queue *queues = vhost->scsi_scrqs.scrqs;
unsigned long flags;
+ int hwqs = 0;
+ int i;
+
+ if (vhost->using_channels)
+ hwqs = vhost->scsi_scrqs.active_queues;
ibmvfc_dbg(vhost, "Purging all requests\n");
spin_lock_irqsave(&vhost->crq.l_lock, flags);
@@ -1064,6 +1070,16 @@ static void ibmvfc_purge_requests(struct ibmvfc_host *vhost, int error_code)
ibmvfc_fail_request(evt, error_code);
list_splice_init(&vhost->crq.sent, &vhost->purge);
spin_unlock_irqrestore(&vhost->crq.l_lock, flags);
+
+ for (i = 0; i < hwqs; i++) {
+ spin_lock_irqsave(queues[i].q_lock, flags);
+ spin_lock(&queues[i].l_lock);
+ list_for_each_entry_safe(evt, pos, &queues[i].sent, queue_list)
+ ibmvfc_fail_request(evt, error_code);
+ list_splice_init(&queues[i].sent, &vhost->purge);
+ spin_unlock(&queues[i].l_lock);
+ spin_unlock_irqrestore(queues[i].q_lock, flags);
+ }
}
/**
--
2.27.0