Message-Id: <20190509181254.988450875@linuxfoundation.org>
Date: Thu, 9 May 2019 20:42:14 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Alan Stern <stern@...land.harvard.edu>,
Seth Bollinger <Seth.Bollinger@...i.com>,
Ming Lei <tom.leiming@...il.com>
Subject: [PATCH 4.9 22/28] usb-storage: Set virt_boundary_mask to avoid SG overflows
From: Alan Stern <stern@...land.harvard.edu>
commit 747668dbc061b3e62bc1982767a3a1f9815fcf0e upstream.
The USB subsystem has always had an unusual requirement for its
scatter-gather transfers: Each element in the scatterlist (except the
last one) must have a length divisible by the bulk maxpacket size.
This is a particular issue for USB mass storage, which uses SG lists
created by the block layer rather than setting up its own.
So far we have scraped by okay because most devices have a logical
block size of 512 bytes or larger, and the bulk maxpacket sizes for
USB 2 and below are all <= 512. However, USB 3 has a bulk maxpacket
size of 1024. Since the xhci-hcd driver includes native SG support,
this hasn't mattered much. But now people are trying to use USB-3
mass storage devices with USBIP, and the vhci-hcd driver currently
does not have full SG support.
The result is an overflow error when the driver attempts to implement
an SG transfer of 63 512-byte blocks as a single
3584-byte (7 blocks) transfer followed by seven 4096-byte (8 blocks)
transfers. The device instead sends 31 1024-byte packets followed by
a 512-byte packet, and this overruns the first SG buffer.
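
To spell out the arithmetic implied by those numbers (all three views
add up to the same 32256 bytes; only the boundaries differ):

	driver's SG list:   3584 + 7 * 4096   = 32256 bytes
	transfer length:    63 * 512          = 32256 bytes
	device's packets:   31 * 1024 + 512   = 32256 bytes

Since 3584 % 1024 == 512, the fourth 1024-byte packet from the device
only half-fits in the first 3584-byte SG element, overrunning it by
512 bytes.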
Ideally this would be fixed by adding better SG support to vhci-hcd.
But for now it appears we can work around the problem by
asking the block layer to respect the maxpacket limitation, through
the use of the virt_boundary_mask.
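
For reference, the block-layer check behind virt_boundary_mask looks
roughly like the 4.9-era __bvec_gap_to_prev() helper in
include/linux/blkdev.h (a sketch, not guaranteed to be verbatim):

	/*
	 * Two bio_vecs may share one SG element only if the previous
	 * vec ends exactly on the virt boundary and the next one adds
	 * no extra offset.  With the mask set to maxp - 1, every
	 * element except the last thus gets a length divisible by maxp.
	 */
	static inline bool __bvec_gap_to_prev(struct request_queue *q,
				struct bio_vec *bprv, unsigned int offset)
	{
		return offset ||
			((bprv->bv_offset + bprv->bv_len) &
			 queue_virt_boundary(q));
	}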
Signed-off-by: Alan Stern <stern@...land.harvard.edu>
Reported-by: Seth Bollinger <Seth.Bollinger@...i.com>
Tested-by: Seth Bollinger <Seth.Bollinger@...i.com>
CC: Ming Lei <tom.leiming@...il.com>
Cc: stable <stable@...r.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/usb/storage/scsiglue.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
--- a/drivers/usb/storage/scsiglue.c
+++ b/drivers/usb/storage/scsiglue.c
@@ -81,6 +81,7 @@ static const char* host_info(struct Scsi
 static int slave_alloc (struct scsi_device *sdev)
 {
 	struct us_data *us = host_to_us(sdev->host);
+	int maxp;
 
 	/*
 	 * Set the INQUIRY transfer length to 36.  We don't use any of
@@ -90,20 +91,17 @@ static int slave_alloc (struct scsi_devi
 	sdev->inquiry_len = 36;
 
 	/*
-	 * USB has unusual DMA-alignment requirements: Although the
-	 * starting address of each scatter-gather element doesn't matter,
-	 * the length of each element except the last must be divisible
-	 * by the Bulk maxpacket value.  There's currently no way to
-	 * express this by block-layer constraints, so we'll cop out
-	 * and simply require addresses to be aligned at 512-byte
-	 * boundaries.  This is okay since most block I/O involves
-	 * hardware sectors that are multiples of 512 bytes in length,
-	 * and since host controllers up through USB 2.0 have maxpacket
-	 * values no larger than 512.
-	 *
-	 * But it doesn't suffice for Wireless USB, where Bulk maxpacket
-	 * values can be as large as 2048.  To make that work properly
-	 * will require changes to the block layer.
+	 * USB has unusual scatter-gather requirements: the length of each
+	 * scatterlist element except the last must be divisible by the
+	 * Bulk maxpacket value.  Fortunately this value is always a
+	 * power of 2.  Inform the block layer about this requirement.
+	 */
+	maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
+	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
+
+	/*
+	 * Some host controllers may have alignment requirements.
+	 * We'll play it safe by requiring 512-byte alignment always.
 	 */
 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
 
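
To make the effect of the new call concrete (values are illustrative,
assuming the SuperSpeed case described above):

	/*
	 * For a SuperSpeed bulk-in endpoint usb_maxpacket() yields
	 * 1024; since maxpacket values are powers of 2, maxp - 1 is
	 * a well-formed mask (0x3ff), so the block layer will only
	 * build middle SG elements whose lengths are multiples of 1024.
	 */
	maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);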