Message-ID: <20220217022851.GB31965@dragon>
Date: Thu, 17 Feb 2022 10:28:53 +0800
From: Shawn Guo <shawn.guo@...aro.org>
To: Marc Zyngier <maz@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Maulik Shah <quic_mkshah@...cinc.com>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Sudeep Holla <sudeep.holla@....com>,
"Rafael J . Wysocki" <rafael@...nel.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Rob Herring <robh+dt@...nel.org>, devicetree@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/3] cpuidle: psci: Call cpu_cluster_pm_enter() on the
last CPU
On Wed, Feb 16, 2022 at 02:39:26PM +0000, Marc Zyngier wrote:
> On 2022-02-16 13:28, Shawn Guo wrote:
> > Make a call to cpu_cluster_pm_enter() on the last CPU entering a low
> > power state (and cpu_cluster_pm_exit() on the first CPU coming back),
> > so that platforms can be notified to set up hardware for entering the
> > cluster low power state.
> >
> > Signed-off-by: Shawn Guo <shawn.guo@...aro.org>
> > ---
> >  drivers/cpuidle/cpuidle-psci.c | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> >
> > diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
> > index b51b5df08450..c748c1a7d7b1 100644
> > --- a/drivers/cpuidle/cpuidle-psci.c
> > +++ b/drivers/cpuidle/cpuidle-psci.c
> > @@ -37,6 +37,7 @@ struct psci_cpuidle_data {
> >  static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data);
> >  static DEFINE_PER_CPU(u32, domain_state);
> >  static bool psci_cpuidle_use_cpuhp;
> > +static atomic_t cpus_in_idle;
> >
> >  void psci_set_domain_state(u32 state)
> >  {
> > @@ -67,6 +68,14 @@ static int __psci_enter_domain_idle_state(struct cpuidle_device *dev,
> >  	if (ret)
> >  		return -1;
> >
> > +	if (atomic_inc_return(&cpus_in_idle) == num_online_cpus()) {
> > +		ret = cpu_cluster_pm_enter();
> > +		if (ret) {
> > +			ret = -1;
> > +			goto dec_atomic;
> > +		}
> > +	}
> > +
> >  	/* Do runtime PM to manage a hierarchical CPU toplogy. */
> >  	rcu_irq_enter_irqson();
> >  	if (s2idle)
> > @@ -88,6 +97,10 @@ static int __psci_enter_domain_idle_state(struct cpuidle_device *dev,
> >  	pm_runtime_get_sync(pd_dev);
> >  	rcu_irq_exit_irqson();
> >
> > +	if (atomic_read(&cpus_in_idle) == num_online_cpus())
> > +		cpu_cluster_pm_exit();
> > +dec_atomic:
> > +	atomic_dec(&cpus_in_idle);
> >  	cpu_pm_exit();
> >
> >  	/* Clear the domain state to start fresh when back from idle. */
>
> Is it just me, or does anyone else find it a bit odd that a cpuidle driver
> calls back into the core cpuidle code to generate new events?
It's not uncommon for a platform driver to call helper functions
provided by the core.
> Also, why is this PSCI specific? I would assume that the core cpuidle code
> should be responsible for these transitions, not a random cpuidle driver.
The CPU PM helpers cpu_pm_enter() and cpu_cluster_pm_enter() are provided
by kernel/cpu_pm.c rather than the cpuidle core. This PSCI cpuidle driver
already uses cpu_pm_enter(), and my patch adds a call to
cpu_cluster_pm_enter() alongside it.
Shawn