Message-ID: <CAPDyKFo_10Q3GOmygejBjW15jk23tbgDNrK2aMV+DvcF5S8oQw@mail.gmail.com>
Date: Sun, 23 Jul 2017 09:20:42 +0200
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Rafael Wysocki <rjw@...ysocki.net>,
Kevin Hilman <khilman@...nel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Stephen Boyd <sboyd@...eaurora.org>,
Nishanth Menon <nm@...com>, Rob Herring <robh+dt@...nel.org>,
Lina Iyer <lina.iyer@...aro.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Sudeep Holla <sudeep.holla@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Andy Gross <andy.gross@...aro.org>,
David Brown <david.brown@...aro.org>
Subject: Re: [PATCH V8 1/6] PM / Domains: Add support to select
performance-state of domains
On 21 July 2017 at 11:05, Viresh Kumar <viresh.kumar@...aro.org> wrote:
> On 21-07-17, 10:35, Ulf Hansson wrote:
>> This depends on how drivers are dealing with runtime PM in conjunction
>> with the new pm_genpd_update_performance_state().
>>
>> In case you don't want to manage some of this in genpd, then each
>> driver will have to drop their constraints every time they are about
>> to runtime suspend its device. And restore them at runtime resume.
>>
>> To me, that seems like a bad idea. Then it's better to make genpd
>> deal with this - somehow.
>
> Right. So we should call the ->set_performance_state() from off/on as
> well. Will do that.
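Great. Just to be concrete, below is roughly what I had in mind - an
untested sketch, reusing the names from your series
(genpd->performance_state and the ->set_performance_state() callback),
so take it as an illustration rather than a proposal:

	/*
	 * Untested sketch: (re)apply the aggregated performance state
	 * when a domain is powered on, and drop it to zero before the
	 * domain is powered off, so drivers don't need to redo their
	 * votes around runtime suspend/resume.
	 */
	static int genpd_apply_performance_state(struct generic_pm_domain *genpd,
						 bool power_on)
	{
		if (!genpd->set_performance_state)
			return 0;

		return genpd->set_performance_state(genpd,
				power_on ? genpd->performance_state : 0);
	}

genpd_power_on() could call this once _genpd_power_on() has succeeded,
and genpd_power_off() just before _genpd_power_off().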
>
>> Yes!
>>
>> On top of that change, you could also add some validation of the
>> get/set callbacks, in case there are any constraints on how they must
>> be assigned.
>
> I am not sure I understood that, sorry. What other constraints are
> you talking about?
Just thinking that if a genpd is about to be added as a subdomain, and
it has ->get_performance_state(), but not ->set_performance_state(),
perhaps we should require its master to have
->set_performance_state().
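For instance, something along these lines could run when a subdomain
is added - again just an untested sketch, with the callback names
taken from your series:

	/*
	 * Untested sketch: a subdomain that can translate votes into a
	 * performance state (->get_performance_state()) but can't apply
	 * one itself (->set_performance_state()) only makes sense if
	 * its master can apply the state on its behalf.
	 */
	static int genpd_validate_performance_callbacks(
				struct generic_pm_domain *subdomain,
				struct generic_pm_domain *master)
	{
		if (subdomain->get_performance_state &&
		    !subdomain->set_performance_state &&
		    !master->set_performance_state)
			return -EINVAL;

		return 0;
	}

genpd_add_subdomain() would then be a natural place to call it.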
Anyway, I'll let you do the thinking about what is and what is not needed here.
[...]
>>
>> My main concern is the order in which you take the locks. We never take
>> a master's lock before the current domain's lock.
>
> Right and this patch doesn't break that.
>
>> And when walking the topology, we use the slave links and lock the
>> first master from that list, continue with that tree, then get back
>> to the slave list and pick the next master.
>
> Again, that's how this patch does it.
>
>> If you change that order, we could end up getting deadlocks.
>
> And because that order isn't changed at all, we shouldn't have
> deadlocks.
True. Trying to clarify more below...
>
>> >> A general comment is that I think you should look more closely in the
>> >> code of genpd_power_off|on(). And also how it calls the
>> >> ->power_on|off() callbacks.
>> >>
>> >> Depending on whether you want to update the performance state of the
>> >> master domain before the subdomain or the opposite, you will find one
>> >> of them being suited for this case as well.
>> >
>> > Isn't it already very similar to that? The only major difference
>> > is link->performance_state, and I already explained why it is required
>> > to be done that way to avoid deadlocks.
>>
>> No, because you walk the master lists. Thus getting a different order of locks.
>>
>> I did some drawing of this, using the slave links, and I don't see any
>> issues why you can't use that instead.
>
> Damn, I am confused about which part you are talking about. Let me paste
> the code here once again and clarify how this is supposed to work just fine :)
I should have been clearer. Walking the master list, then checking
each link without taking locks - why is that safe?
And even if you think it is safe, please explain in detail why it is needed.
Walking the slave list, as is done for power off/on, should work
perfectly fine for your case as well. No?
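For reference, this is roughly how genpd_power_on() walks the topology
today (simplified from drivers/base/power/domain.c, with most of the
error handling dropped) - the current domain's lock is always taken
before the master's, through the nested locking:

	list_for_each_entry(link, &genpd->slave_links, slave_node) {
		struct generic_pm_domain *master = link->master;

		genpd_sd_counter_inc(master);

		genpd_lock_nested(master, depth + 1);
		ret = genpd_power_on(master, depth + 1);
		genpd_unlock(master);

		if (ret) {
			genpd_sd_counter_dec(master);
			goto err;
		}
	}

I would expect the performance-state propagation to be able to follow
the same pattern.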
[...]
Kind regards
Uffe