Message-ID: <CAJZ5v0gNCFSwfmwaeGevbzica95ZD8CH-FUxD_2VN607-EhXCQ@mail.gmail.com>
Date: Wed, 11 Jun 2014 20:44:31 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: jonghwa3.lee@...sung.com
Cc: Ulf Hansson <ulf.hansson@...aro.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Pavel Machek <pavel@....cz>, Len Brown <len.brown@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
cw00.choi@...sung.com, Myungjoo Ham <myungjoo.ham@...sung.com>,
jy0922.shim@...sung.com, inki.dae@...sung.com
Subject: Re: [RFC PATCH] PM: Domain: Add flag to assure that device's runtime
pm callback is called at once.
On Wed, Jun 11, 2014 at 12:31 PM, <jonghwa3.lee@...sung.com> wrote:
> Hi Ulf,
> On 2014년 06월 11일 17:27, Ulf Hansson wrote:
>
>> On 11 June 2014 02:33, Jonghwa Lee <jonghwa3.lee@...sung.com> wrote:
>>> When a device uses a generic PM domain, its own runtime suspend callback is
>>> executed only when all devices in the same domain are suspended. However, some
>>> devices need a synchronous runtime suspend that is not deferred.
>>> For those devices, the generic PM domain adds a new API for adding a device with
>>> a flag, which guarantees that the driver's runtime suspend will be executed right
>>> away, not at the time the domain is actually powered off.
>>> The existing API, pm_genpd_add_device(), now adds the device with a flag indicating
>>> that it does not need the synchronous callback, so it keeps working as before.
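For context, a device is registered with its PM domain through pm_genpd_add_device(genpd, dev) today. A rough sketch of what the changelog above describes might look like the following; the flag values and the __pm_genpd_add_device_flagged() helper are hypothetical names, not necessarily what the RFC patch actually introduces:

/*
 * Hypothetical sketch based on the changelog above; the flag values and
 * the __pm_genpd_add_device_flagged() helper are illustrative names only.
 */
struct generic_pm_domain;
struct device;

#define GENPD_DEV_CB_DEFERRED	0	/* old behaviour: driver callback deferred to domain power-off */
#define GENPD_DEV_CB_IMMEDIATE	1	/* new behaviour: driver callback runs right away */

int __pm_genpd_add_device_flagged(struct generic_pm_domain *genpd,
				  struct device *dev, unsigned int flags);

/* The existing API keeps the old, deferred behaviour for current callers. */
static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
				      struct device *dev)
{
	return __pm_genpd_add_device_flagged(genpd, dev, GENPD_DEV_CB_DEFERRED);
}

A device that must not have its runtime suspend deferred would then register with GENPD_DEV_CB_IMMEDIATE instead.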
>>
>> Hi Jonghwa,
>>
>> I understand you have an issue with the behaviour of genpd here, and I
>> agree we need a solution, but I am not sure this is the correct one.
>> :-)
>>
>> The way genpd currently deals with runtime PM suspend
>> callbacks has two severe issues:
>>
>> 1) It prevents fine-grained power management within a domain, simply
>> because resources that are controlled at levels below the pm_domain
>> can't be put into a low-power state until the whole domain is powered
>> off.
>> 2) All devices within a domain will be runtime PM suspended at the
>> same time. This causes a thundering herd problem, since latencies get
>> accumulated for each device in a domain.
>>
>> Instead I think we should try to change the default behaviour of genpd
>> and let it invoke the runtime PM callbacks immediately and not wait
>> for the domain to be dropped. Do you think that could work?
>>
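To illustrate the proposal, here is a sketch only (not existing genpd code) of a runtime suspend path that runs the driver's callback right away; pm_generic_runtime_suspend() is the real generic helper, while the genpd-specific steps are only described in comments:

#include <linux/pm_runtime.h>

/*
 * Sketch of the proposed behaviour only, not existing genpd code.
 * pm_generic_runtime_suspend() is the real generic runtime PM helper.
 */
static int genpd_runtime_suspend_immediate_sketch(struct device *dev)
{
	/* Run the driver's ->runtime_suspend() right away... */
	int ret = pm_generic_runtime_suspend(dev);

	if (ret)
		return ret;

	/*
	 * ...then do genpd's own per-device "stop" step (clock gating etc.)
	 * and queue a domain power-off check for when all devices in the
	 * domain have become idle.
	 */
	return 0;
}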
>
>
> Yes, I think it could. I didn't understand why genpd prevents devices from being
> suspended separately and instead suspends them all together.
The reason was that it assumed device-specific suspend would be carried
out by genpd_stop_dev() and the device state would be saved by the driver's
runtime suspend callback. Of course, it only makes sense to save device
states if power is going to be removed from the domain, and that's why it
was done this way.
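To make the deferred flow described above concrete, here is a simplified sketch of the sequence; this is not the actual drivers/base/power/domain.c code, the helpers ending in _sketch() are illustrative placeholders, and the genpd_stop_dev() call only mirrors the step named in the text, with an assumed signature:

#include <linux/device.h>
#include <linux/pm_domain.h>

/* Illustrative placeholders, not real kernel symbols: */
struct generic_pm_domain *dev_to_genpd_sketch(struct device *dev);
void queue_domain_poweroff_sketch(struct generic_pm_domain *genpd);
void save_device_state_sketch(struct device *dev);
void genpd_stop_dev(struct generic_pm_domain *genpd, struct device *dev);

static int genpd_runtime_suspend_deferred_sketch(struct device *dev)
{
	struct generic_pm_domain *genpd = dev_to_genpd_sketch(dev);

	/*
	 * Stop the device (clocks etc.), but do not run the driver's
	 * ->runtime_suspend() yet: saving device state only pays off if
	 * power is actually going to be removed from the whole domain.
	 */
	genpd_stop_dev(genpd, dev);

	/* Queue a domain power-off check; the driver callback runs later. */
	queue_domain_poweroff_sketch(genpd);
	return 0;
}

static void genpd_poweroff_sketch(struct generic_pm_domain *genpd)
{
	struct pm_domain_data *pdd;

	/* Only now, with the whole domain going down, save each device's state. */
	list_for_each_entry(pdd, &genpd->dev_list, list_node)
		save_device_state_sketch(pdd->dev);	/* driver ->runtime_suspend() */

	genpd->power_off(genpd);	/* finally remove power from the domain */
}

Note that the per-device walk in the power-off path also shows where the registration-order suspend sequence comes from.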
> But I'm just
> hesitant to change genpd's behavior completely just because of my own inconvenience.
>
> And in addition to the above problems, there is one more problem due to genpd's
> current all-at-once suspending.
>
> 3) It breaks device dependencies. Even if devices request to be suspended
> in a particular sequence because of their dependencies, genpd simply ignores that
> and suspends them in order of registration. That means the suspend order depends
> entirely on when a device was bound to genpd.
>
> Given these three definite problems, it looks like genpd should be changed. However,
> someone might want to keep it working as before, who knows? I'd respectfully like to
> ask for Rafael's opinion.
Well, I don't work on it directly any more.
I'm also not against changing it if that's going to help. What you need to do,
though, is to audit all code using it and figure out how your changes are going
to affect that code.
Rafael