Message-Id: <1190482861.4035.59.camel@chaos>
Date: Sat, 22 Sep 2007 19:41:01 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: David Brownell <david-b@...bell.net>
Cc: Andrew Morton <akpm@...l.org>, LKML <linux-kernel@...r.kernel.org>,
Benedikt Spranger <bene@...utronix.de>,
Stable Team <stable@...nel.org>
Subject: [PATCH] usb-gadget-ether: Prevent oops caused by error interrupt race -V2 (comments update)

From: Benedikt Spranger <bene@...utronix.de>
eth_start_xmit() can race against a disconnect interrupt in the gadget
device driver, which nukes all pending requests. Right now we access
the pending request list unconditionally and dereference the request
list head itself in that case, which results in an Oops.

Check whether the list is empty before actually dereferencing
dev->tx_reqs.next. Also add a comment at the second list_empty() check
further down to avoid confusion.

This is a long-standing bug; the patch should be applied to stable as
well.

Signed-off-by: Benedikt Spranger <bene@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>

diff --git a/drivers/usb/gadget/ether.c b/drivers/usb/gadget/ether.c
index 593e235..f2a7bd5 100644
--- a/drivers/usb/gadget/ether.c
+++ b/drivers/usb/gadget/ether.c
@@ -1989,8 +1989,21 @@ static int eth_start_xmit (struct sk_buff *skb, struct net_device *net)
 	}
 
 	spin_lock_irqsave(&dev->req_lock, flags);
+	/*
+	 * dev->tx_reqs may be empty. We raced against a disconnect
+	 * interrupt in the gadget device driver, which nuked all
+	 * pending requests.
+	 */
+	if (list_empty(&dev->tx_reqs)) {
+		netif_stop_queue(net);
+		spin_unlock_irqrestore(&dev->req_lock, flags);
+		return 1;
+	}
+
 	req = container_of (dev->tx_reqs.next, struct usb_request, list);
 	list_del (&req->list);
+
+	/* last request in list: stop queue */
 	if (list_empty (&dev->tx_reqs))
 		netif_stop_queue (net);
 	spin_unlock_irqrestore(&dev->req_lock, flags);
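
For reference, the failure mode is easy to model in plain C. The
sketch below is illustrative only and shares nothing with the driver
beyond the names: hand-rolled list helpers stand in for the kernel's
list.h, a cast stands in for container_of(), a pthread mutex for
dev->req_lock, and drain_tx_reqs() for the disconnect path that
empties dev->tx_reqs.

/*
 * Minimal userspace model of the race fixed above.  Everything here
 * is a stand-in, not the driver code.
 */
#include <pthread.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add(struct list_head *e, struct list_head *h)
{
	e->next = h->next;
	e->prev = h;
	h->next->prev = e;
	h->next = e;
}

/* stands in for struct usb_request; the list member must come first
 * so the cast below can replace container_of() */
struct fake_request { struct list_head list; int id; };

static struct list_head tx_reqs;
static pthread_mutex_t req_lock = PTHREAD_MUTEX_INITIALIZER;

/* xmit path: take a request off the free list, with the check the
 * patch adds; returns NULL when the caller must stop the queue */
static struct fake_request *get_tx_req(void)
{
	struct fake_request *req = NULL;

	pthread_mutex_lock(&req_lock);
	if (!list_empty(&tx_reqs)) {
		req = (struct fake_request *) tx_reqs.next;
		list_del(&req->list);
	}
	pthread_mutex_unlock(&req_lock);
	return req;
}

/* disconnect path: nukes all pending requests under the same lock;
 * in the driver this runs from the gadget's disconnect interrupt */
static void drain_tx_reqs(void)
{
	pthread_mutex_lock(&req_lock);
	while (!list_empty(&tx_reqs))
		list_del(tx_reqs.next);
	pthread_mutex_unlock(&req_lock);
}

int main(void)
{
	struct fake_request r = { .id = 1 };

	INIT_LIST_HEAD(&tx_reqs);
	list_add(&r.list, &tx_reqs);

	drain_tx_reqs();	/* the disconnect won the race */

	/* without the list_empty() check this would dereference the
	 * list head itself; with it we just get NULL back */
	printf("req = %p\n", (void *) get_tx_req());
	return 0;
}

Without the list_empty() check in get_tx_req(), the xmit side would
pull tx_reqs.next off an empty list and hand back a pointer into the
list head itself, which is exactly the dereference that oopses in the
driver.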