Message-ID: <Pine.LNX.4.44L0.0910071720440.4512-100000@iolanthe.rowland.org>
Date: Wed, 7 Oct 2009 17:34:12 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
cc: Oliver Neukum <oliver@...kum.org>, Greg KH <greg@...ah.com>,
Kernel development list <linux-kernel@...r.kernel.org>,
USB list <linux-usb@...r.kernel.org>
Subject: Re: [PATCH 4/5] usb_serial: Kill port mutex
On Wed, 7 Oct 2009, Alan Cox wrote:
> On Wed, 7 Oct 2009 22:56:20 +0200
> Oliver Neukum <oliver@...kum.org> wrote:
>
> > On Wednesday, 7 October 2009 20:52:21, Alan Stern wrote:
> > > However in the option and sierra drivers there is a perverse path from
> > > close to resume: Both their close methods call
> > > usb_autopm_get_interface(). This could be removed without much
> > > trouble; perhaps we should do so.
> >
> > I am afraid this won't do in the long run. Some drivers will want to
> > shut down devices by communicating with them in close().
>
> drivers/serial will need a power management hook to use
> tty_port_{open/close}, so perhaps that can be covered for both. In the
> serial case it mostly needs to kick the device out of PCI D3 and could
> probably be fudged, but if USB needs it, perhaps it should be explicit.
I'm losing track of the original point of this thread. IIRC, the
problem is how the resume method should know whether to submit the
receive URB(s). It can't afford to acquire the port mutex because it
might be invoked from within open or close, at which point the mutex is
already held.
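
To make the ordering hazard concrete, here's a rough sketch of the path
described above (example_close() is an invented name; the real callers
are the option and sierra close() methods running under the usb-serial
core's port mutex):

	#include <linux/usb.h>
	#include <linux/usb/serial.h>

	/* Sketch only -- illustrates the close-to-resume path. */
	static void example_close(struct usb_serial_port *port)
	{
		mutex_lock(&port->mutex);	/* held by the usb-serial core */

		/*
		 * option/sierra call this from close().  If the device is
		 * autosuspended, the driver's resume() runs synchronously
		 * inside this call -- while the port mutex is still held.
		 */
		usb_autopm_get_interface(port->serial->interface);

		/*
		 * So resume() must not try to take port->mutex in order to
		 * decide whether the receive URBs should be resubmitted.
		 */

		mutex_unlock(&port->mutex);
	}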
Other schemes could work, but to me it seems simplest to rely on a flag
protected by a spinlock. The flag would mean "URBs are supposed to be
queued unless we are suspended". It would be set by open and
unthrottle, and cleared by close and throttle.
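
Something along these lines (a minimal sketch; the names rx_enabled,
rx_start(), rx_stop(), and example_resume() are made up for
illustration, and the spinlock could just as well be an existing
per-port lock):

	#include <linux/spinlock.h>
	#include <linux/usb.h>

	/* Sketch only.  One flag per port, protected by a spinlock. */
	struct example_port_priv {
		spinlock_t	lock;
		bool		rx_enabled;	/* queue URBs unless suspended */
		struct urb	*read_urb;
	};

	/* open() and unthrottle() set the flag and submit */
	static int rx_start(struct example_port_priv *priv)
	{
		unsigned long flags;

		spin_lock_irqsave(&priv->lock, flags);
		priv->rx_enabled = true;
		spin_unlock_irqrestore(&priv->lock, flags);

		return usb_submit_urb(priv->read_urb, GFP_KERNEL);
	}

	/* close() and throttle() just clear the flag */
	static void rx_stop(struct example_port_priv *priv)
	{
		unsigned long flags;

		spin_lock_irqsave(&priv->lock, flags);
		priv->rx_enabled = false;
		spin_unlock_irqrestore(&priv->lock, flags);
	}

	/* resume() consults the flag instead of the port mutex */
	static int example_resume(struct example_port_priv *priv)
	{
		unsigned long flags;
		bool submit;

		spin_lock_irqsave(&priv->lock, flags);
		submit = priv->rx_enabled;
		spin_unlock_irqrestore(&priv->lock, flags);

		return submit ? usb_submit_urb(priv->read_urb, GFP_NOIO) : 0;
	}

The point is that resume() can make its decision under the spinlock,
which is safe to take from any of these contexts, while open and close
keep the mutex for their own serialization.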
Alan Stern