Message-ID: <CAPDyKFqsvF=Pm-vMXSUwPMPnjCr7nSYuy5AH+8rwLYm_NUPKww@mail.gmail.com>
Date: Fri, 4 Feb 2022 10:43:54 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Dmitry Osipenko <digetx@...il.com>
Cc: "Rafael J . Wysocki" <rafael@...nel.org>, linux-pm@...r.kernel.org,
Kevin Hilman <khilman@...nel.org>,
Alexandre Torgue <alexandre.torgue@...s.st.com>,
Geert Uytterhoeven <geert+renesas@...der.be>,
Rajendra Nayak <rnayak@...eaurora.org>,
Dong Aisheng <aisheng.dong@....com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] PM: domains: Prevent power off for parent unless child is
in deepest state
On Mon, 31 Jan 2022 at 19:29, Dmitry Osipenko <digetx@...il.com> wrote:
>
> On 31.01.2022 14:37, Ulf Hansson wrote:
> > A PM domain managed by genpd may support multiple idle states. During
> > genpd_power_off() a genpd governor may be asked to select one of the
> > idle states based upon the dev PM QoS constraints, for example.
> >
> > However, there is a problem with the behaviour around this in genpd.
> > More precisely, a parent domain is allowed to be powered off,
> > regardless of which idle state has been selected for the child domain.
> >
> > So far, we have not received any reports of errors, possibly because
> > there may not yet be a platform with this hierarchical configuration.
> > Nevertheless, it seems reasonable to change the behaviour to prevent
> > the parent domain from being powered off unless the deepest idle state
> > has been selected for the child domain, so let's do that.
> >
> > Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
> > ---
> > drivers/base/power/domain.c | 18 ++++++++++++++++++
> > 1 file changed, 18 insertions(+)
> >
> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > index 5db704f02e71..7f97c5cabdc2 100644
> > --- a/drivers/base/power/domain.c
> > +++ b/drivers/base/power/domain.c
> > @@ -636,6 +636,17 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
> > atomic_read(&genpd->sd_count) > 0)
> > return -EBUSY;
> >
> > + /*
> > + * The children must be in their deepest states to allow the parent to
> > + * be powered off. Note that there's no need for additional locking, as
> > + * powering on a child requires the parent's lock to be acquired first.
> > + */
> > + list_for_each_entry(link, &genpd->parent_links, parent_node) {
> > + struct generic_pm_domain *child = link->child;
> > + if (child->state_idx < child->state_count - 1)
> > + return -EBUSY;
> > + }
> > +
> > list_for_each_entry(pdd, &genpd->dev_list, list_node) {
> > enum pm_qos_flags_status stat;
> >
> > @@ -1073,6 +1084,13 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
> > || atomic_read(&genpd->sd_count) > 0)
> > return;
> >
> > + /* Check that the children are in their deepest state. */
> > + list_for_each_entry(link, &genpd->parent_links, parent_node) {
> > + struct generic_pm_domain *child = link->child;
> > + if (child->state_idx < child->state_count - 1)
> > + return;
> > + }
> > +
> > /* Choose the deepest state when suspending */
> > genpd->state_idx = genpd->state_count - 1;
> > if (_genpd_power_off(genpd, false))
>
> Hello Ulf,
Hi Dmitry,
>
> Is this needed by a concrete SoC? It needs to be clarified in the commit
> message, otherwise it looks like this patch wasn't tested and it's
> unclear whether this change is really needed.
It's needed on an STMicro SoC that I have been working on. However,
it's difficult for me to test on that platform, as some SoC-specific
pieces are missing upstream (the power domain deployment in
particular). Anyway, let me add some information about this in the
commit log for the next version.
When it comes to testing, I am using a couple of local dummy test
drivers. One manages devices that get attached to a genpd, mostly to
exercise runtime PM and dev PM QoS calls - and another manages the PM
domains with genpd. I have been thinking of a way to share these
"tools" to let other people use them for testing too, but I just
haven't gotten to it yet.
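To give a rough idea of what the provider side looks like, here is a
stripped down sketch (this is not my actual test code - the domain
names, latencies and the QoS value below are made-up placeholders):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/pm_domain.h>
    #include <linux/pm_qos.h>

    /* Two idle states for the child: index 0 is shallow, index 1 is
     * the deepest. Latencies are in nanoseconds, purely illustrative. */
    static struct genpd_power_state dummy_child_states[] = {
            { .power_off_latency_ns = 100000, .power_on_latency_ns = 100000 },
            { .power_off_latency_ns = 500000, .power_on_latency_ns = 500000 },
    };

    static struct generic_pm_domain dummy_child_pd = {
            .name = "dummy-child",
            .states = dummy_child_states,
            .state_count = ARRAY_SIZE(dummy_child_states),
    };

    static struct generic_pm_domain dummy_parent_pd = {
            .name = "dummy-parent",
    };

    static int __init dummy_pd_setup(void)
    {
            int ret;

            /* Let the QoS-aware governor pick among the idle states. */
            ret = pm_genpd_init(&dummy_parent_pd, &simple_qos_governor, false);
            if (ret)
                    return ret;

            ret = pm_genpd_init(&dummy_child_pd, &simple_qos_governor, false);
            if (ret)
                    return ret;

            /* With $subject patch, the parent may be powered off only
             * when the child ends up in dummy_child_states[1], its
             * deepest state. */
            return pm_genpd_add_subdomain(&dummy_parent_pd, &dummy_child_pd);
    }

    /* On the consumer side, a device attached to dummy_child_pd can
     * steer the governor's idle state selection with a resume latency
     * request (in microseconds). E.g. 200 us permits the shallow state
     * (100 us) but rules out the deepest one (500 us): */
    static struct dev_pm_qos_request dummy_qos_req;

    static void dummy_block_deepest_state(struct device *dev)
    {
            dev_pm_qos_add_request(dev, &dummy_qos_req,
                                   DEV_PM_QOS_RESUME_LATENCY, 200);
    }

With something along those lines, one can verify that the parent stays
powered on whenever the child is prevented from reaching its deepest
state.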
Besides the above, do you see any issues with $subject patch from an
Nvidia platform's point of view?
Kind regards
Uffe