Message-ID: <CAPDyKFoh8UV=QC6RhOkc=FSvoeqF_UiWp97h0Qp8dniB=sS+8A@mail.gmail.com>
Date:   Thu, 21 Jul 2022 18:48:10 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Abel Vesa <abel.vesa@...aro.org>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Kevin Hilman <khilman@...nel.org>,
        Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-pm@...r.kernel.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] PM: domains: Reverse the order of performance and enabling ops

On Wed, 20 Jul 2022 at 13:03, Abel Vesa <abel.vesa@...aro.org> wrote:
>
> Rather than enabling the power domain and then setting the performance
> state, which usually translates into stepping through two different
> levels (voltages) to reach the one required by the consumer, give the
> providers a chance to cache the performance state needed by the
> consumer, so that when powering on the power domain the provider can
> use the cached level instead.

I don't think it's really clear what you want to do here. Let's see
what the discussion below brings us to, but for the next version
please elaborate a bit more in the commit message.

Although, if I understand correctly (also from our offlist
discussions), you want to make it possible for the genpd provider to
go from two calls into the FW down to a single one. So it's basically
an optimization, which certainly sounds worth doing to me.
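
To make sure we are talking about the same thing, here's a rough sketch
of what I imagine the provider side could then look like. To be clear,
this isn't the actual Qcom driver; struct foo_pd, to_foo_pd(),
foo_fw_set_level() and FOO_LEVEL_LOW are just placeholders I made up.

#include <linux/pm_domain.h>

/* Placeholder provider data, not the real Qcom driver. */
struct foo_pd {
	struct generic_pm_domain genpd;
	unsigned int req_level;	/* cached level requested via genpd */
	bool enabled;
};

#define to_foo_pd(gpd)		container_of(gpd, struct foo_pd, genpd)
#define FOO_LEVEL_LOW		1	/* made-up "default" low level */

/* Stand-in for the single call into the FW (implementation omitted). */
static int foo_fw_set_level(struct foo_pd *pd, unsigned int level);

static int foo_pd_set_performance_state(struct generic_pm_domain *domain,
					unsigned int state)
{
	struct foo_pd *pd = to_foo_pd(domain);

	/* Always remember the aggregated level genpd asked for... */
	pd->req_level = state;

	/* ...but only talk to the FW while the domain is enabled. */
	if (!pd->enabled)
		return 0;

	return foo_fw_set_level(pd, state);
}

static int foo_pd_power_on(struct generic_pm_domain *domain)
{
	struct foo_pd *pd = to_foo_pd(domain);
	int ret;

	/* Enable at the cached level instead of a "default" low one. */
	ret = foo_fw_set_level(pd, pd->req_level ?: FOO_LEVEL_LOW);
	if (!ret)
		pd->enabled = true;

	return ret;
}

With your reordering, genpd would have updated the cached level before
->power_on() runs, so the enable ends up being a single FW call.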

Furthermore, to get the complete picture, in the Qcom case, we set a
"default" low performance level from the genpd's ->power_on()
callback, which is needed to enable basic functionality for some
consumers.

The second call that I refer to is made when genpd invokes the
->set_performance_state() callback (from genpd_runtime_suspend()),
which genpd does to potentially set a new value for the aggregated
performance state of the PM domain. When a new performance state
actually gets set in this path, we end up calling the FW twice in the
Qcom case, where the first call is unnecessary.

Did I get that right?
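
For reference, this is how I read the two runtime PM paths today,
before your patch. The FW call annotations are my own understanding of
the Qcom case, so correct me if I mapped them wrong:

	/* genpd_runtime_resume(), before this patch: */

	/* ->power_on(): FW call setting the "default" low level */
	ret = genpd_power_on(genpd, 0);

	/* may call ->set_performance_state(): FW call setting the
	 * aggregated level the consumers actually asked for */
	if (!ret)
		genpd_restore_performance_state(dev, gpd_data->rpm_pstate);

	/* genpd_runtime_suspend(), before this patch: */

	/* may call ->set_performance_state() with a lower aggregate,
	 * right before... */
	gpd_data->rpm_pstate = genpd_drop_performance_state(dev);

	/* ...->power_off() potentially turns the domain off anyway */
	genpd_power_off(genpd, true, 0);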

> Also, the drop_performance and power_off calls have to be reversed, so
> that when the last active consumer suspends, the level doesn't actually
> drop until the PD is disabled.

I don't quite get what this part helps with. Is it really needed to
improve the behaviour?

>
> For the power domains that do not provide set_performance, things
> remain unchanged, as they do for the power domains that only provide
> set_performance but not power_on/off.

Right, good points!

I'll get back with a review of the code soon; I just wanted to make
sure I have the complete picture first.

Kind regards
Uffe

>
> Signed-off-by: Abel Vesa <abel.vesa@...aro.org>
> ---
>  drivers/base/power/domain.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 5a2e0232862e..38647c304b73 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -939,8 +939,8 @@ static int genpd_runtime_suspend(struct device *dev)
>                 return 0;
>
>         genpd_lock(genpd);
> -       gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
>         genpd_power_off(genpd, true, 0);
> +       gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
>         genpd_unlock(genpd);
>
>         return 0;
> @@ -978,9 +978,8 @@ static int genpd_runtime_resume(struct device *dev)
>                 goto out;
>
>         genpd_lock(genpd);
> +       genpd_restore_performance_state(dev, gpd_data->rpm_pstate);
>         ret = genpd_power_on(genpd, 0);
> -       if (!ret)
> -               genpd_restore_performance_state(dev, gpd_data->rpm_pstate);
>         genpd_unlock(genpd);
>
>         if (ret)
> @@ -1018,8 +1017,8 @@ static int genpd_runtime_resume(struct device *dev)
>  err_poweroff:
>         if (!pm_runtime_is_irq_safe(dev) || genpd_is_irq_safe(genpd)) {
>                 genpd_lock(genpd);
> -               gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
>                 genpd_power_off(genpd, true, 0);
> +               gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
>                 genpd_unlock(genpd);
>         }
>
> @@ -2747,17 +2746,6 @@ static int __genpd_dev_pm_attach(struct device *dev, struct device *base_dev,
>         dev->pm_domain->detach = genpd_dev_pm_detach;
>         dev->pm_domain->sync = genpd_dev_pm_sync;
>
> -       if (power_on) {
> -               genpd_lock(pd);
> -               ret = genpd_power_on(pd, 0);
> -               genpd_unlock(pd);
> -       }
> -
> -       if (ret) {
> -               genpd_remove_device(pd, dev);
> -               return -EPROBE_DEFER;
> -       }
> -
>         /* Set the default performance state */
>         pstate = of_get_required_opp_performance_state(dev->of_node, index);
>         if (pstate < 0 && pstate != -ENODEV && pstate != -EOPNOTSUPP) {
> @@ -2769,6 +2757,18 @@ static int __genpd_dev_pm_attach(struct device *dev, struct device *base_dev,
>                         goto err;
>                 dev_gpd_data(dev)->default_pstate = pstate;
>         }
> +
> +       if (power_on) {
> +               genpd_lock(pd);
> +               ret = genpd_power_on(pd, 0);
> +               genpd_unlock(pd);
> +       }
> +
> +       if (ret) {
> +               genpd_remove_device(pd, dev);
> +               return -EPROBE_DEFER;
> +       }
> +
>         return 1;
>
>  err:
> --
> 2.34.3
>
