Message-ID: <20181130095912.7hutbsrihpg4gez4@vireshk-i7>
Date: Fri, 30 Nov 2018 15:29:12 +0530
From: Viresh Kumar <viresh.kumar@...aro.org>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Kevin Hilman <khilman@...nel.org>, Pavel Machek <pavel@....cz>,
Len Brown <len.brown@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Niklas Cassel <niklas.cassel@...aro.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V2 5/5] PM / Domains: Propagate performance state updates
On 30-11-18, 10:44, Ulf Hansson wrote:
> On Mon, 26 Nov 2018 at 09:10, Viresh Kumar <viresh.kumar@...aro.org> wrote:
> >
> > This commit updates genpd core to start propagating performance state
> > updates to master domains that have their set_performance_state()
> > callback set.
> >
> > A genpd now handles two types of performance states. The first is the
> > performance state requirement put on the genpd by the devices and
> > sub-domains under it, which is already represented by
> > genpd->performance_state.
>
> This isn't correct.
>
> Currently genpd->performance_state reflects the aggregation of the
> state requirements from each device that is attached to it. Not from
> subdomains.
Okay, will improve the changelog.
> > The second type, introduced in this commit, is
> > the performance state requirement(s) put by the genpd on its master
> > domain(s). There is a separate value required for each master that the
> > genpd has and so a new field is added to the struct gpd_link
> > (link->performance_state), which represents the link between a genpd and
> > its master. The struct gpd_link also gains another field,
> > prev_performance_state, which is used by the genpd core as a temporary
> > variable during transitions.
> >
> > We need to propagate performance state updates while powering on a genpd
> > as well, since we ignore performance state requirements from sub-domains
> > that are powered off. For this reason _genpd_power_on() also receives an
> > additional parameter, depth, which is used for hierarchical locking
> > within genpd.
> >
> > Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
> > ---
> > drivers/base/power/domain.c | 113 ++++++++++++++++++++++++++++++------
> > include/linux/pm_domain.h | 4 ++
> > 2 files changed, 98 insertions(+), 19 deletions(-)
> >
> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > index d6b389a9f678..206e263abe90 100644
> > --- a/drivers/base/power/domain.c
> > +++ b/drivers/base/power/domain.c
> > @@ -239,24 +239,88 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
> > static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
> > #endif
> >
> > +static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
> > + unsigned int state, int depth);
> > +
>
> I don't like forward declarations like these, as in most cases they mean
> that the code can be re-worked and improved.
>
> However, because of the recursive code in genpd, I understand that
> it may be needed in some cases. Anyway, please have a look and see if
> you can get rid of it.
Will see if I can do it any better, though I really doubt it :)
> > static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
> > - unsigned int state)
> > + unsigned int state, int depth)
> > -static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
> > +static int
> > +_genpd_power_on(struct generic_pm_domain *genpd, bool timed, int depth)
>
> Cosmetics, but please move the new parameter below instead of moving
> "static int" above.
>
> > {
> > unsigned int state_idx = genpd->state_idx;
> > ktime_t time_start;
> > @@ -368,7 +442,8 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
> > elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
> >
> > if (unlikely(genpd->set_performance_state)) {
> > - ret = genpd->set_performance_state(genpd, genpd->performance_state);
> > + ret = _genpd_set_performance_state(genpd,
> > + genpd->performance_state, depth);
>
> This is going to be a problem for genpd_sync_power_on(), as in some
> cases we cannot use locks.
So we pass the use_lock parameter to this routine as well?
> Moreover I am wondering how this plays with genpd_power_on(), as when
> it calls _genpd_power_on() the first time, that is done by holding the
> top-level master's lock - and all locks in that hierarchy. In other
> words we are always powering on the top most master first, then step
> by step we move down in the hierarchy to power on the genpds.
Sure, but the lock ordering is always subdomain first, then master.
Considering the Qcom case, we have two domains: Cx (sub-domain) and Mx (master).
On the first genpd_power_on(Cx) call, we will first call genpd_power_on(Mx),
which will simply power it on, as none of Mx's masters has perf-state support.
We then call _genpd_power_on(Cx), which will also not do anything with Mx, as
Cx's own pstate would be 0 at that time. But even if it had a valid value, it
would propagate just fine with all the proper locking in place.
I will wait for your reply on this particular patch before sending out V3.
Thanks for the reviews Ulf.
--
viresh