Message-Id: <1437616850-23266-23-git-send-email-kamal@canonical.com>
Date: Wed, 22 Jul 2015 18:59:00 -0700
From: Kamal Mostafa <kamal@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...ts.ubuntu.com
Cc: Bart Van Assche <bart.vanassche@...disk.com>,
James Bottomley <JBottomley@...n.com>,
Sagi Grimberg <sagig@...lanox.com>,
Sebastian Parschauer <sebastian.riemer@...fitbricks.com>,
Doug Ledford <dledford@...hat.com>,
Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 3.13.y-ckt 022/132] scsi_transport_srp: Introduce srp_wait_for_queuecommand()
3.13.11-ckt24 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bart.vanassche@...disk.com>
commit be34c62ddf39d1931780b07a6f4241393e4ba2ee upstream.
Introduce the helper function srp_wait_for_queuecommand() and move
the definition of scsi_request_fn_active() up so that it is defined
before its new caller. Add a comment above scsi_request_fn_active()
noting that support for scsi-mq still needs to be added.
This patch does not change any functionality. A second call to
srp_wait_for_queuecommand() will be introduced in the next patch.
Signed-off-by: Bart Van Assche <bart.vanassche@...disk.com>
Cc: James Bottomley <JBottomley@...n.com>
Cc: Sagi Grimberg <sagig@...lanox.com>
Cc: Sebastian Parschauer <sebastian.riemer@...fitbricks.com>
Signed-off-by: Doug Ledford <dledford@...hat.com>
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
drivers/scsi/scsi_transport_srp.c | 53 +++++++++++++++++++++++----------------
1 file changed, 31 insertions(+), 22 deletions(-)
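[ Note for reviewers: the commit message mentions a second call to
srp_wait_for_queuecommand() arriving in the next patch. Purely as an
illustration -- the function name below is invented and the real call
site is introduced by that follow-up patch -- a second caller would
follow the same block-then-drain pattern srp_reconnect_rport() uses:

	/*
	 * Illustrative sketch only, not part of this patch: the function
	 * name is hypothetical; only the pattern is the point.
	 */
	static void srp_example_quiesce(struct srp_rport *rport)
	{
		struct Scsi_Host *shost = rport_to_shost(rport);

		/* Stop new queuecommand() invocations... */
		scsi_target_block(&shost->shost_gendev);
		/* ...then wait for those already running to return. */
		srp_wait_for_queuecommand(shost);
	}
]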
diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
index 2700a5a..33fee20 100644
--- a/drivers/scsi/scsi_transport_srp.c
+++ b/drivers/scsi/scsi_transport_srp.c
@@ -389,6 +389,36 @@ static void srp_reconnect_work(struct work_struct *work)
}
}
+/**
+ * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
+ * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
+ *
+ * To do: add support for scsi-mq in this function.
+ */
+static int scsi_request_fn_active(struct Scsi_Host *shost)
+{
+ struct scsi_device *sdev;
+ struct request_queue *q;
+ int request_fn_active = 0;
+
+ shost_for_each_device(sdev, shost) {
+ q = sdev->request_queue;
+
+ spin_lock_irq(q->queue_lock);
+ request_fn_active += q->request_fn_active;
+ spin_unlock_irq(q->queue_lock);
+ }
+
+ return request_fn_active;
+}
+
+/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
+static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
+{
+ while (scsi_request_fn_active(shost))
+ msleep(20);
+}
+
static void __rport_fail_io_fast(struct srp_rport *rport)
{
struct Scsi_Host *shost = rport_to_shost(rport);
@@ -501,26 +531,6 @@ void srp_start_tl_fail_timers(struct srp_rport *rport)
EXPORT_SYMBOL(srp_start_tl_fail_timers);
/**
- * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
- */
-static int scsi_request_fn_active(struct Scsi_Host *shost)
-{
- struct scsi_device *sdev;
- struct request_queue *q;
- int request_fn_active = 0;
-
- shost_for_each_device(sdev, shost) {
- q = sdev->request_queue;
-
- spin_lock_irq(q->queue_lock);
- request_fn_active += q->request_fn_active;
- spin_unlock_irq(q->queue_lock);
- }
-
- return request_fn_active;
-}
-
-/**
* srp_reconnect_rport() - reconnect to an SRP target port
*
* Blocks SCSI command queueing before invoking reconnect() such that
@@ -554,8 +564,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
if (res)
goto out;
scsi_target_block(&shost->shost_gendev);
- while (scsi_request_fn_active(shost))
- msleep(20);
+ srp_wait_for_queuecommand(shost);
res = i->f->reconnect(rport);
pr_debug("%s (state %d): transport.reconnect() returned %d\n",
dev_name(&shost->shost_gendev), rport->state, res);
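[ A note on the scsi-mq to-do above: q->request_fn_active counts
callers of the legacy scsi_request_fn() path only, so on a scsi-mq
host scsi_request_fn_active() would always return 0 and the wait
would be a no-op. One possible shape of the missing support -- a
hypothetical sketch, not part of this patch, with an invented helper
name -- would be to poll a host-wide in-flight count instead:

	/*
	 * Hypothetical sketch: blk-mq dispatch bypasses scsi_request_fn(),
	 * so q->request_fn_active stays 0 there.  Polling the host-wide
	 * in-flight command count is one (stricter) alternative;
	 * host_in_flight() is an invented name -- in this kernel series
	 * the count lives in shost->host_busy under the host lock.
	 */
	static void srp_wait_for_queuecommand_mq(struct Scsi_Host *shost)
	{
		while (host_in_flight(shost))	/* hypothetical helper */
			msleep(20);
	}
]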
--
1.9.1