Message-ID: <CAPDyKFoafAk72Kw6X7626Niduaii0V5VM4dGSWmq+e3JTh7VRg@mail.gmail.com>
Date: Wed, 4 Aug 2021 11:59:33 +0200
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Dmitry Osipenko <digetx@...il.com>
Cc: Thierry Reding <thierry.reding@...il.com>,
Jonathan Hunter <jonathanh@...dia.com>,
Viresh Kumar <vireshk@...nel.org>,
Stephen Boyd <sboyd@...nel.org>,
Peter De Schrijver <pdeschrijver@...dia.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-tegra <linux-tegra@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>
Subject: Re: [PATCH v7 02/37] soc/tegra: pmc: Implement attach_dev() of power
domain drivers
On Mon, 2 Aug 2021 at 20:23, Dmitry Osipenko <digetx@...il.com> wrote:
>
> 02.08.2021 17:48, Ulf Hansson wrote:
> ...
> >> + if (!list_empty(&genpd->child_links)) {
> >> + link = list_first_entry(&genpd->child_links, struct gpd_link,
> >> + child_node);
> >> + core_genpd = link->parent;
> >> + } else {
> >> + core_genpd = genpd;
> >> + }
> >
> > This looks a bit odd to me. A genpd provider shouldn't need to walk
> > these links, as these are considered internals to genpd. Normally this
> > needs locking, etc.
> >
> > Why exactly do you need this?
>
> We have a chain of PMC domain -> core domain; both domains are created
> and linked together by this PMC driver. Devices are attached to either
> the PMC domain or the core domain. The PMC domain doesn't handle
> performance changes; performance requests go down to the core domain.
Did I get this right? The core domain is the parent to the PMC domain?
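Just to make sure I read the topology correctly, below is roughly the
provider-side setup I have in mind. The names and init flags are my own
assumptions for the sake of the sketch, not taken from the patch:

  #include <linux/pm_domain.h>

  static struct generic_pm_domain core_genpd = { .name = "core" };
  static struct generic_pm_domain pmc_genpd = { .name = "pmc-io" };

  static int tegra_domains_link_example(void)
  {
          int err;

          /* The core domain manages the performance states (voltage). */
          err = pm_genpd_init(&core_genpd, NULL, false);
          if (err)
                  return err;

          /* The PMC domain doesn't handle performance changes itself. */
          err = pm_genpd_init(&pmc_genpd, NULL, false);
          if (err)
                  return err;

          /* Core domain as parent, PMC domain as child. */
          return pm_genpd_add_subdomain(&core_genpd, &pmc_genpd);
  }

If that matches the patch, then the child_links walk above is really
just a way of reaching the parent domain.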
>
> This is needed in order to translate the device's OPP into a performance
> state of the core domain, based on the domain to which the device is attached.
So, the PMC domain doesn't have an OPP table associated with it, but
some of its attached devices may still have available OPPs, which
should be managed through the parent domain (core domain). Correct?
Is there a DT patch in the series that I can look at that shows how
this is encoded?
Hmm, I have the feeling that we should try to manage this in some
generic way in genpd, rather than having to deal with it here.
>
> >> +
> >> + pd_opp_table = dev_pm_opp_get_opp_table(&core_genpd->dev);
> >> + if (IS_ERR(pd_opp_table)) {
> >> + dev_err(&genpd->dev, "failed to get OPP table of %s: %pe\n",
> >> + dev_name(&core_genpd->dev), pd_opp_table);
> >> + ret = PTR_ERR(pd_opp_table);
> >> + goto put_dev_opp;
> >> + }
> >> +
> >> + pd_opp = dev_pm_opp_xlate_required_opp(opp_table, pd_opp_table, opp);
> >> + if (IS_ERR(pd_opp)) {
> >> + dev_err(&genpd->dev,
> >> + "failed to xlate required OPP for %luHz of %s: %pe\n",
> >> + rate, dev_name(dev), pd_opp);
> >> + ret = PTR_ERR(pd_opp);
> >> + goto put_pd_opp_table;
> >> + }
> >> +
> >> + /*
> >> + * The initialized state will be applied by GENPD core on the first
> >> + * RPM-resume of the device. This means that drivers don't need to
> >> + * explicitly initialize performance state.
> >> + */
> >> + state = pm_genpd_opp_to_performance_state(&core_genpd->dev, pd_opp);
> >> + gpd_data->rpm_pstate = state;
> >
> > Could the above be replaced with Rajendra's suggestion [1], which
> > changes genpd internally, during attach, to set a default
> > performance state when there is a "required-opp" specified in the
> > device node?
>
> It's not a "static" performance level here, but any level, depending on
> the h/w state left behind by the bootloader, etc. The performance level
> corresponds to the voltage of the core domain, hence we need to
> initialize the voltage vote before the device is resumed.
Why not let the driver deal with this instead? It should be able to
probe its device, no matter what state the bootloader has put the
device into.
To me, it sounds like a call to dev_pm_genpd_set_performance_state()
(perhaps via dev_pm_opp_set_opp() or dev_pm_opp_set_rate()) from the
driver itself, should be sufficient?
I understand that it means the domain may change the OPP during boot,
without respecting a vote for a device that has not been probed yet.
But is there a problem with this?
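To be a bit more concrete, something along these lines in the consumer
driver's runtime resume (or probe) path is what I'm thinking of. The
struct and field names are just placeholders of mine; the sketch assumes
the driver has already set up its OPP table and clock:

  #include <linux/clk.h>
  #include <linux/device.h>
  #include <linux/pm_opp.h>

  struct foo_device {
          struct clk *clk;
          unsigned long rate;
  };

  static int foo_runtime_resume(struct device *dev)
  {
          struct foo_device *foo = dev_get_drvdata(dev);
          int err;

          /*
           * Re-assert the OPP here, which also updates the performance
           * state vote towards the core domain, instead of relying on
           * whatever state the bootloader left behind.
           */
          err = dev_pm_opp_set_rate(dev, foo->rate);
          if (err)
                  return err;

          return clk_prepare_enable(foo->clk);
  }

That way the first RPM resume brings both the clock rate and the
domain's performance state in line, without attach_dev() having to
pre-initialize rpm_pstate.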
Kind regards
Uffe