Message-ID: <CAPDyKFrYKRz=S_9_6RNGWn4p3K1MLP63rRfVDSKvH_o8SjZCeQ@mail.gmail.com>
Date:   Fri, 14 Jan 2022 13:30:06 +0100
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Maulik Shah <quic_mkshah@...cinc.com>
Cc:     bjorn.andersson@...aro.org, linux-arm-msm@...r.kernel.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        rafael@...nel.org, daniel.lezcano@...aro.org,
        quic_lsrao@...cinc.com, quic_rjendra@...cinc.com
Subject: Re: [PATCH 06/10] soc: qcom: rpmh-rsc: Attach RSC to cluster PM domain

On Sun, 9 Jan 2022 at 18:26, Maulik Shah <quic_mkshah@...cinc.com> wrote:
>
> From: Lina Iyer <ilina@...eaurora.org>
>
> RSC is part of the CPU subsystem and powers off the CPU domains when all
> the CPUs are idle and no RPMH transactions are pending from any of the
> drivers. The RSC needs to flush the 'sleep' and 'wake' votes that are
> critical for saving power when all the CPUs are idle.
>
> Let's make the RSC part of the CPU PM domains by attaching it to the
> cluster power domain. By registering for PM domain notifications, the
> RSC driver can be notified that the last CPU is powering down. When the
> last CPU powers down the domain, flush the 'sleep' and 'wake' votes
> stored in the data buffers into the hardware and also write the next
> wakeup in CONTROL_TCS.
>
> Signed-off-by: Lina Iyer <ilina@...eaurora.org>
> Signed-off-by: Maulik Shah <quic_mkshah@...cinc.com>
> ---
>  drivers/soc/qcom/rpmh-internal.h |  6 +++-
>  drivers/soc/qcom/rpmh-rsc.c      | 60 ++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 62 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> index 344ba68..32ac117 100644
> --- a/drivers/soc/qcom/rpmh-internal.h
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -97,7 +97,9 @@ struct rpmh_ctrlr {
>   * @rsc_pm:             CPU PM notifier for controller.
>   *                      Used when solver mode is not present.
>   * @cpus_in_pm:         Number of CPUs not in idle power collapse.
> - *                      Used when solver mode is not present.
> + *                      Used when solver mode and "power-domains" is not present.
> + * @genpd_nb:           PM Domain notifier for cluster genpd notifications.
> + * @genpdb:             PM Domain for cluster genpd.

/s/genpdb/genpd

>   * @tcs:                TCS groups.
>   * @tcs_in_use:         S/W state of the TCS; only set for ACTIVE_ONLY
>   *                      transfers, but might show a sleep/wake TCS in use if
> @@ -117,6 +119,8 @@ struct rsc_drv {
>         int id;
>         int num_tcs;
>         struct notifier_block rsc_pm;
> +       struct notifier_block genpd_nb;
> +       struct generic_pm_domain *genpd;
>         atomic_t cpus_in_pm;
>         struct tcs_group tcs[TCS_TYPE_NR];
>         DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 01c2f50c..5875ad5 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -14,10 +14,13 @@
>  #include <linux/kernel.h>
>  #include <linux/list.h>
>  #include <linux/module.h>
> +#include <linux/notifier.h>
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
>  #include <linux/of_platform.h>
>  #include <linux/platform_device.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
>  #include <linux/wait.h>
> @@ -834,6 +837,51 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
>         return ret;
>  }
>
> +/**
> + * rpmh_rsc_pd_callback() - Check if any of the AMCs are busy.
> + * @nfb:    Pointer to the genpd notifier block in struct rsc_drv.
> + * @action: GENPD_NOTIFY_PRE_OFF, GENPD_NOTIFY_OFF, GENPD_NOTIFY_PRE_ON or GENPD_NOTIFY_ON.
> + * @v:      Unused
> + *
> + * This function is given to dev_pm_genpd_add_notifier() so we can be informed
> + * about when cluster-pd is going down. When cluster go down we know no more active
> + * transfers will be started so we write sleep/wake sets. This function gets
> + * called from cpuidle code paths and also at system suspend time.
> + *
> + * If AMCs are not busy then writes cached sleep and wake messages to TCSes.
> + * The firmware then takes care of triggering them when entering deepest low power modes.
> + *
> + * Return:
> + * * NOTIFY_OK          - success
> + * * NOTIFY_BAD         - failure
> + */
> +static int rpmh_rsc_pd_callback(struct notifier_block *nfb,
> +                               unsigned long action, void *v)
> +{
> +       struct rsc_drv *drv = container_of(nfb, struct rsc_drv, genpd_nb);
> +
> +       /* We don't need to lock as domin on/off are serialized */

/s/domin/genpd

> +       if ((action == GENPD_NOTIFY_PRE_OFF) &&
> +           (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client)))
> +               return NOTIFY_BAD;
> +
> +       return NOTIFY_OK;
> +}
> +
> +static int rpmh_rsc_pd_attach(struct rsc_drv *drv, struct device *dev)
> +{
> +       int ret;
> +
> +       pm_runtime_enable(dev);
> +       ret = dev_pm_domain_attach(dev, false);

Unless I have missed something, this should not be needed.

This is because it's a regular platform driver and we only have a
single PM domain to attach for the rsc device. In this case, the
platform bus is capable of managing the attach to the genpd. See
platform_probe() in drivers/base/platform.c.
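
To illustrate (heavily simplified and paraphrased from memory, not a
verbatim copy of the upstream code), the relevant part of
platform_probe() is roughly:

static int platform_probe(struct device *_dev)
{
	struct platform_driver *drv = to_platform_driver(_dev->driver);
	struct platform_device *dev = to_platform_device(_dev);
	int ret;

	/* The bus attaches the (single) PM domain before calling ->probe() */
	ret = dev_pm_domain_attach(_dev, true);
	if (ret)
		return ret;

	if (drv->probe) {
		ret = drv->probe(dev);
		if (ret)
			/* ...and detaches it again if probing fails */
			dev_pm_domain_detach(_dev, true);
	}

	return ret;
}

So by the time rpmh_rsc_probe() runs, the rsc device should already be
attached to the cluster genpd.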

> +       if (ret)
> +               return ret;
> +
> +       drv->genpd = pd_to_genpd(dev->pm_domain);

I couldn't find where this pointer is being used later in the driver.
In any case, you can probably use dev->pm_domain directly wherever
needed instead.

> +       drv->genpd_nb.notifier_call = rpmh_rsc_pd_callback;
> +       return dev_pm_genpd_add_notifier(dev, &drv->genpd_nb);

You should call pm_runtime_disable() in the error path.
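
Together with the two comments above (no dev_pm_domain_attach() and no
drv->genpd assignment), I would expect the helper to end up looking
roughly like this (untested sketch):

static int rpmh_rsc_pd_attach(struct rsc_drv *drv, struct device *dev)
{
	int ret;

	pm_runtime_enable(dev);
	drv->genpd_nb.notifier_call = rpmh_rsc_pd_callback;

	ret = dev_pm_genpd_add_notifier(dev, &drv->genpd_nb);
	if (ret)
		pm_runtime_disable(dev);

	return ret;
}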

> +}
> +
>  static int rpmh_probe_tcs_config(struct platform_device *pdev,
>                                  struct rsc_drv *drv, void __iomem *base)
>  {
> @@ -963,7 +1011,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
>                 return ret;
>
>         /*
> -        * CPU PM notification are not required for controllers that support
> +        * CPU PM/genpd notification are not required for controllers that support
>          * 'HW solver' mode where they can be in autonomous mode executing low
>          * power mode to power down.
>          */
> @@ -971,8 +1019,14 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
>         solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
>         solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
>         if (!solver_config) {
> -               drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
> -               cpu_pm_register_notifier(&drv->rsc_pm);
> +               if (of_find_property(dn, "power-domains", NULL)) {

Rather than parsing the DT, I think it's better to check if
"dev->pm_domain" has been assigned. As I indicated above, the platform
bus manages the attach before the driver's ->probe() callback is
invoked.
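
That is, something along these lines (untested):

	if (!solver_config) {
		if (pdev->dev.pm_domain) {
			ret = rpmh_rsc_pd_attach(drv, &pdev->dev);
			if (ret)
				return ret;
		} else {
			drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
			cpu_pm_register_notifier(&drv->rsc_pm);
		}
	}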

> +                       ret = rpmh_rsc_pd_attach(drv, &pdev->dev);
> +                       if (ret)
> +                               return ret;
> +               } else {
> +                       drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
> +                       cpu_pm_register_notifier(&drv->rsc_pm);
> +               }
>         }
>
>         /* Enable the active TCS to send requests immediately */

Beyond this point, you need to call the following to manage the error path
correctly:
dev_pm_genpd_remove_notifier()
pm_runtime_disable()
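
For example, if anything after this point fails in rpmh_rsc_probe(), the
error path needs to undo the setup, roughly along these lines (just a
sketch, the label name is made up):

err_domain:
	if (pdev->dev.pm_domain) {
		dev_pm_genpd_remove_notifier(&pdev->dev);
		pm_runtime_disable(&pdev->dev);
	}
	return ret;

A corresponding cpu_pm_unregister_notifier() is probably needed for the
other branch too.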

> --
> 2.7.4
>

Kind regards
Uffe
