Message-ID: <c026265b1b15473e41ea3c8cd72abf07@mail.gmail.com>
Date: Mon, 4 Jul 2011 16:31:45 +0530
From: Partha Basak <p-basak2@...com>
To: balbi@...com
Cc: Alan Stern <stern@...land.harvard.edu>,
Keshava Munegowda <keshava_mgowda@...com>,
linux-usb@...r.kernel.org, linux-omap@...r.kernel.org,
linux-kernel@...r.kernel.org, Anand Gadiyar <gadiyar@...com>,
sameo@...ux.intel.com, parthab@...ia.ti.com, tony@...mide.com,
Kevin Hilman <khilman@...com>,
Benoit Cousson <b-cousson@...com>, paul@...an.com,
johnstul@...ibm.com, Vishwanath Sripathy <vishwanath.bs@...com>
Subject: RE: [PATCH 6/6 v2] arm: omap: usb: global Suspend and resume support
of ehci and ohci
>-----Original Message-----
>From: linux-omap-owner@...r.kernel.org [mailto:linux-omap-
>owner@...r.kernel.org] On Behalf Of Felipe Balbi
>Sent: Monday, July 04, 2011 3:01 PM
>To: Partha Basak
>Cc: balbi@...com; Alan Stern; Keshava Munegowda; linux-
>usb@...r.kernel.org; linux-omap@...r.kernel.org; linux-
>kernel@...r.kernel.org; Anand Gadiyar; sameo@...ux.intel.com;
>parthab@...ia.ti.com; tony@...mide.com; Kevin Hilman; Benoit Cousson;
>paul@...an.com; johnstul@...ibm.com; Vishwanath Sripathy
>Subject: Re: [PATCH 6/6 v2] arm: omap: usb: global Suspend and resume
>support of ehci and ohci
>
>Hi,
>
>On Mon, Jul 04, 2011 at 02:56:30PM +0530, Partha Basak wrote:
>> >> Both for EHCI & OHCI, the clocks are owned by the parent (uhh-tll).
>> >>
>> >> Calling pm_runtime_put_sync(dev->parent) within omap_ehci_suspend
>> >> will turn off the parent clocks in the suspend path.
>> >>
>> >> Similarly, calling pm_runtime_get_sync(dev->parent) within
>> >> omap_ehci_resume will turn on the parent clocks in the resume path.
>> >>
>> >> This way, all reference counting is implicit within the Runtime PM
>> >> layer, and it takes care of all combinations: only EHCI insmod'ed,
>> >> only OHCI insmod'ed, both insmod'ed, etc.
>> >>
>> >> When both EHCI & OHCI are suspended, parent clocks will actually be
>> >> turned OFF and vice-versa.
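
For illustration, a minimal sketch of what such callbacks could look
like (the function names and bodies below are only assumptions made for
this discussion, not the actual patch):

#include <linux/pm_runtime.h>

static int omap_ehci_suspend(struct device *dev)
{
	/*
	 * EHCI itself owns no clocks; drop our use count on the parent
	 * (uhh-tll), which does.  Only when both EHCI and OHCI have
	 * dropped their counts do the parent clocks actually get cut.
	 */
	pm_runtime_put_sync(dev->parent);
	return 0;
}

static int omap_ehci_resume(struct device *dev)
{
	/*
	 * Take the parent reference back so the uhh-tll clocks are on
	 * again before the controller is touched.
	 */
	pm_runtime_get_sync(dev->parent);
	return 0;
}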
>> >
>> >not sure this is necessary. I would expect:
>> >
>> >pm_runtime_get_sync(dev) to propagate up the parent tree and enable
>> >all necessary resources to get the child in a working state. IOW, you
>> >shouldn't need to manually access the parent device.
>> >
>> Refer to the description in Patch(5/6) <snip> In fact, the runtime PM
>> framework does make the child's get_sync and put_sync reach the parent
>> too; but the parent is only idled in ASYNC mode. That queues a work
>> item on the runtime PM workqueue, which does not get scheduled in the
>> global suspend path.
>> <snip>
>> This approach was tried, but did not work in the suspend path
>
>Sounds to me like a bug in runtime PM? If you're calling the
>pm_runtime_*_sync() family, shouldn't all the resulting calls be
>_sync() too?
>
>> static int rpm_suspend(struct device *dev, int rpmflags)
>> 	__releases(&dev->power.lock) __acquires(&dev->power.lock)
>> {
>> 	...
>>  no_callback:
>> 	...
>> 	/* Maybe the parent is now able to suspend. */
>> 	if (parent && !parent->power.ignore_children &&
>> 	    !dev->power.irq_safe) {
>> 		spin_unlock(&dev->power.lock);
>>
>> 		spin_lock(&parent->power.lock);
>> 		rpm_idle(parent, RPM_ASYNC);
>
>To me this is bogus; if you called pm_runtime_put_sync(), it should
>be sync too. Shouldn't it?
>
>> 		spin_unlock(&parent->power.lock);
>>
>> 		spin_lock(&dev->power.lock);
>> 	}
>> This is the reason for calling the parent's runtime PM APIs directly
>> from the children.
>> If calling the runtime PM APIs with the parent dev pointer directly
>> isn't acceptable, the same can be achieved by exporting wrapper APIs
>> from the parent and calling them from the children's .suspend/.resume
>> routines (a sketch of that follows below).
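
For illustration, a rough sketch of that wrapper idea; the names
omap_uhhtll_enable/omap_uhhtll_disable are made up for this discussion
only:

#include <linux/export.h>
#include <linux/pm_runtime.h>

/* Exported by the uhh-tll (parent) driver. */
int omap_uhhtll_enable(struct device *uhh_dev)
{
	/* Turn the shared uhh-tll clocks on via runtime PM. */
	return pm_runtime_get_sync(uhh_dev);
}
EXPORT_SYMBOL_GPL(omap_uhhtll_enable);

int omap_uhhtll_disable(struct device *uhh_dev)
{
	/* Drop our use count; clocks go off when EHCI and OHCI both idle. */
	return pm_runtime_put_sync(uhh_dev);
}
EXPORT_SYMBOL_GPL(omap_uhhtll_disable);

The EHCI/OHCI .suspend/.resume callbacks would then call these instead
of touching dev->parent directly.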
>
>Still no good, IMHO.
Kevin, any comments?
Shouldn't the framework ensure that, when put_sync(dev) is called,
the parent is rpm_idled synchronously as well?
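
For illustration, something along these lines (untested, purely a
sketch against the rpm_suspend() snippet quoted above) is the kind of
behaviour I mean:

	/* Maybe the parent is now able to suspend. */
	if (parent && !parent->power.ignore_children &&
	    !dev->power.irq_safe) {
		spin_unlock(&dev->power.lock);

		spin_lock(&parent->power.lock);
		/*
		 * Sketch only: if the child was suspended synchronously,
		 * idle the parent synchronously as well, so that a
		 * put_sync() on the child reaches the parent even when
		 * the pm workqueue is not running (global suspend path).
		 */
		rpm_idle(parent, rpmflags & RPM_ASYNC ? RPM_ASYNC : 0);
		spin_unlock(&parent->power.lock);

		spin_lock(&dev->power.lock);
	}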
>
>--
>balbi