Message-ID: <4637CD25.5070104@rtr.ca>
Date: Tue, 01 May 2007 19:28:37 -0400
From: Mark Lord <lkml@....ca>
To: Linux Kernel <linux-kernel@...r.kernel.org>, marcel@...tmann.org,
Greg KH <gregkh@...e.de>, Andrew Morton <akpm@...l.org>
Subject: [BUG] usb/core/hub.c loops forever on resume from ram due to bluetooth
I have just replaced my primary single-core notebook
with a nearly identical dual-core notebook,
and moved the usb-bluetooth peripheral from the old
machine to the new one.
On the single-core machine, suspend/resume (RAM) worked
fine even with the bluetooth module enabled.
On the new dual-core machine, resuming with bluetooth
enabled results in an infinite(?) lockup in an unbounded
loop in hub_tt_kevent(). With PM debug on, I see
tens of thousands of these messages scrolling on the console:
kernel: usb 5-1: clear tt 4 (9042) error -71
kernel: usb 5-1: clear tt 4 (9042) error -71
kernel: usb 5-1: clear tt 4 (9042) error -71
(over and over and ...)
By restricting iterations on the unbounded loop
the machine is able to resume again.
Greg / Marcel: any words of wisdom?
And we should probably put a permanent bound on that loop;
here is the patch I devised/used to accomplish it.
Now, I still get close to a thousand or so such
messages, in groups, showing up in syslog,
but at least the system can resume after suspend.
Signed-off-by: Mark Lord <mlord@...ox.com>
--- linux/drivers/usb/core/hub.c.orig 2007-04-26 12:02:47.000000000 -0400
+++ linux/drivers/usb/core/hub.c 2007-05-01 18:48:46.000000000 -0400
@@ -403,9 +403,10 @@
 	struct usb_hub		*hub =
 		container_of(work, struct usb_hub, tt.kevent);
 	unsigned long		flags;
+	int			limit = 500;
 
 	spin_lock_irqsave (&hub->tt.lock, flags);
-	while (!list_empty (&hub->tt.clear_list)) {
+	while (--limit && !list_empty (&hub->tt.clear_list)) {
 		struct list_head	*temp;
 		struct usb_tt_clear	*clear;
 		struct usb_device	*hdev = hub->hdev;
-----