Message-Id: <20121028231546.198682183@decadent.org.uk>
Date: Sun, 28 Oct 2012 23:16:06 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk,
"K. Y. Srinivasan" <kys@...rosoft.com>,
James Bottomley <JBottomley@...allels.com>
Subject: [ 030/105] [SCSI] storvsc: Account for in-transit packets in the RESET path
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: "K. Y. Srinivasan" <kys@...rosoft.com>
commit 5c1b10ab7f93d24f29b5630286e323d1c5802d5c upstream.
Properly account for I/O in transit before returning from the RESET call.
In the absence of this patch, the host may respond to a command that was
issued prior to the RESET command at some arbitrary time after it has
already responded to the RESET command.
Currently, the host does not do anything with the RESET command.
Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
Signed-off-by: James Bottomley <JBottomley@...allels.com>
[bwh: Backported to 3.2: adjust filename, context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
drivers/staging/hv/storvsc_drv.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/drivers/staging/hv/storvsc_drv.c
+++ b/drivers/staging/hv/storvsc_drv.c
@@ -1043,7 +1043,12 @@ static int storvsc_host_reset(struct hv_
/*
* At this point, all outstanding requests in the adapter
* should have been flushed out and return to us
+ * There is a potential race here where the host may be in
+ * the process of responding when we return from here.
+ * Just wait for all in-transit packets to be accounted for
+ * before we return from here.
*/
+ storvsc_wait_to_drain(stor_device);
cleanup:
return ret;
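
For context, the storvsc_wait_to_drain() call added above follows the usual
quiesce-and-wait pattern: the driver tracks how many requests are still
outstanding on the channel and sleeps until the completion path has accounted
for the last one. A minimal sketch of that pattern is below; the field names
(num_outstanding_req, drain_notify, waiting_to_drain) are illustrative
assumptions and are not part of this patch.

    #include <linux/atomic.h>
    #include <linux/wait.h>

    /* Sketch only -- field names are assumed for illustration. */
    struct storvsc_device {
            atomic_t num_outstanding_req;      /* packets sent, not yet completed */
            bool drain_notify;                 /* a waiter is blocked in the drain path */
            wait_queue_head_t waiting_to_drain;
            /* ... */
    };

    static void storvsc_wait_to_drain(struct storvsc_device *dev)
    {
            dev->drain_notify = true;
            /* Sleep until every in-transit packet has been completed. */
            wait_event(dev->waiting_to_drain,
                       atomic_read(&dev->num_outstanding_req) == 0);
            dev->drain_notify = false;
    }

The per-packet completion handler would then decrement the counter for every
response received from the host and wake the waiter once the count reaches
zero while a drain is pending, for example:

    if (atomic_dec_return(&dev->num_outstanding_req) == 0 &&
        dev->drain_notify)
            wake_up(&dev->waiting_to_drain);

With something like this in place, the reset handler cannot return while the
host still owes a response for a command issued before the RESET.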
--