Message-ID: <Pine.LNX.4.44L0.1303141719450.1194-100000@iolanthe.rowland.org>
Date: Thu, 14 Mar 2013 17:33:07 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: Soeren Moch <smoch@....de>
cc: Arnd Bergmann <arnd@...db.de>,
USB list <linux-usb@...r.kernel.org>,
Jason Cooper <jason@...edaemon.net>,
Andrew Lunn <andrew@...n.ch>,
Sebastian Hesselbarth <sebastian.hesselbarth@...il.com>,
<linux-mm@...ck.org>,
Kernel development list <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH] USB: EHCI: fix for leaking isochronous data
On Thu, 14 Mar 2013, Soeren Moch wrote:
> > If the memory really is being leaked here in some sort of systematic
> > way, we may be able to see it in your debugging output after a few
> > seconds.
> >
>
> OK, here are the first seconds of the log. But the buffer exhaustion
> usually occurs after several hours of runtime...
The log shows a 1-1 match between allocations and deallocations, except
for three excess allocations about 45 lines before the end. I have no
idea what's up with those. They may be an artifact arising from where
you stopped copying the log data.
There are as many as 400 iTDs being allocated before any are freed.
That seems like a lot. Are they all for the same isochronous endpoint?
What's the endpoint's period? How often are URBs submitted?
In general, there shouldn't be more than a couple of milliseconds' worth
of iTDs allocated for any endpoint, depending on how many URBs are in
the pipeline at any time.
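(To put rough numbers on that, assuming a high-speed endpoint polled
every microframe: each iTD covers one frame, i.e. 1 ms of transactions,
so 400 iTDs would correspond to something like 400 ms of data queued on
a single endpoint, far more than a driver normally keeps in flight.)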
Maybe a better way to go about this is, instead of printing out every
allocation and deallocation, to keep a running counter. You could have
the driver print out the value of this counter every minute or so. Any
time the device isn't in use, the counter should be 0.
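Something along these lines would do (a rough, untested sketch; the
hook points are assumptions, they would go wherever ehci-hcd actually
allocates and frees iTDs in drivers/usb/host/ehci-sched.c):

	/* Debug-only: one counter for outstanding iTDs plus a
	 * rate-limited report.  No locking around itd_report_due;
	 * an occasional extra or missed line doesn't matter here.
	 */
	#include <linux/atomic.h>
	#include <linux/jiffies.h>
	#include <linux/printk.h>

	static atomic_t itd_count = ATOMIC_INIT(0);
	static unsigned long itd_report_due;

	static inline void itd_debug_alloc(void)
	{
		atomic_inc(&itd_count);
	}

	static inline void itd_debug_free(void)
	{
		atomic_dec(&itd_count);
	}

	/* Call from any code path that runs regularly, e.g. the
	 * allocation path itself.
	 */
	static inline void itd_debug_report(void)
	{
		if (!itd_report_due || time_after(jiffies, itd_report_due)) {
			itd_report_due = jiffies + 60 * HZ;
			pr_info("ehci: %d iTDs currently allocated\n",
					atomic_read(&itd_count));
		}
	}

Calling itd_debug_alloc()/itd_debug_free() from the alloc and free
paths and itd_debug_report() from somewhere that runs often would give
you one line per minute showing how many iTDs are outstanding, which
should make a slow leak obvious without flooding the log.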
Alan Stern