Message-ID: <CAPDyKFpPa1u1ouoa2zFzMe2McqEpxu7Vex2zd1nYoQUQtY2s-A@mail.gmail.com>
Date: Wed, 19 Dec 2018 11:02:05 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: "Rafael J . Wysocki" <rjw@...ysocki.net>,
Sudeep Holla <sudeep.holla@....com>,
Lorenzo Pieralisi <Lorenzo.Pieralisi@....com>,
Mark Rutland <mark.rutland@....com>,
Linux PM <linux-pm@...r.kernel.org>,
"Raju P . L . S . S . S . N" <rplsssn@...eaurora.org>,
Stephen Boyd <sboyd@...nel.org>,
Tony Lindgren <tony@...mide.com>,
Kevin Hilman <khilman@...nel.org>,
Lina Iyer <ilina@...eaurora.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Geert Uytterhoeven <geert+renesas@...der.be>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v10 02/27] PM / Domains: Add support for CPU devices to genpd
On Wed, 19 Dec 2018 at 10:53, Daniel Lezcano <daniel.lezcano@...aro.org> wrote:
>
> On 29/11/2018 18:46, Ulf Hansson wrote:
> > To enable a device belonging to a CPU to be attached to a PM domain managed
> > by genpd, let's make a few changes to it, so as to make it convenient to
> > manage the specifics around CPUs.
> >
> > To be able to quickly find out which CPUs are attached to a genpd, which
> > typically becomes useful from a genpd governor as subsequent changes will
> > show, let's add a cpumask to the struct generic_pm_domain. At the point
> > when a CPU device gets attached to a genpd, let's update its cpumask.
> > Moreover, let's also propagate changes to the cpumask upwards in the
> > topology to the master PM domains. In this way, the cpumask for a genpd
> > hierarchically reflects all CPUs attached to the topology below it.
> >
> > Finally, let's make this an opt-in feature, to avoid having to manage CPUs
> > and the cpumask for a genpd that doesn't need it. For that reason, let's
> > add a new genpd configuration bit, GENPD_FLAG_CPU_DOMAIN.
> >
> > Cc: Lina Iyer <ilina@...eaurora.org>
> > Co-developed-by: Lina Iyer <lina.iyer@...aro.org>
> > Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
> > ---
> >
> > Changes in v10:
> > - Don't allocate the cpumask when not used.
> > - Simplify the code that updates the cpumask.
> > - Document the GENPD_FLAG_CPU_DOMAIN.
> >
> > ---
> > drivers/base/power/domain.c | 66 ++++++++++++++++++++++++++++++++++++-
> > include/linux/pm_domain.h | 13 ++++++++
> > 2 files changed, 78 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > index e27b91d36a2a..c3ff8e395308 100644
> > --- a/drivers/base/power/domain.c
> > +++ b/drivers/base/power/domain.c
> > @@ -20,6 +20,7 @@
> > #include <linux/sched.h>
> > #include <linux/suspend.h>
> > #include <linux/export.h>
> > +#include <linux/cpu.h>
> >
> > #include "power.h"
> >
> > @@ -126,6 +127,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
> > #define genpd_is_irq_safe(genpd) (genpd->flags & GENPD_FLAG_IRQ_SAFE)
> > #define genpd_is_always_on(genpd) (genpd->flags & GENPD_FLAG_ALWAYS_ON)
> > #define genpd_is_active_wakeup(genpd) (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
> > +#define genpd_is_cpu_domain(genpd) (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
> >
> > static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
> > const struct generic_pm_domain *genpd)
> > @@ -1377,6 +1379,56 @@ static void genpd_free_dev_data(struct device *dev,
> > dev_pm_put_subsys_data(dev);
> > }
> >
> > +static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
> > + int cpu, bool set, unsigned int depth)
> > +{
> > + struct gpd_link *link;
> > +
> > + if (!genpd_is_cpu_domain(genpd))
> > + return;
>
> With this test, we won't continue updating the cpumask for the other
> masters. Is it done on purpose?
Correct, and yes it's on purpose.
We are not even allocating the cpumask for the genpd in question
unless it has GENPD_FLAG_CPU_DOMAIN set.
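As an illustration, here is a minimal, self-contained sketch of that
behaviour (plain user-space C with simplified names, not the kernel
implementation): the mask only exists on domains flagged as CPU domains,
so the upward walk through the masters naturally stops at the first
domain without the flag.

	/*
	 * Minimal user-space model of the behaviour discussed above, not
	 * the kernel code: only domains flagged as CPU domains carry a
	 * cpumask, and the upward propagation stops as soon as a master
	 * without the flag is reached, since it never allocated a mask.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_MASTERS	4

	struct domain {
		const char *name;
		bool is_cpu_domain;			/* models GENPD_FLAG_CPU_DOMAIN */
		unsigned long cpumask;			/* only valid when is_cpu_domain */
		struct domain *masters[MAX_MASTERS];	/* parents in the PM topology */
	};

	static void update_cpumask(struct domain *d, int cpu, bool set)
	{
		int i;

		if (!d->is_cpu_domain)
			return;				/* no mask here, stop the walk */

		/* propagate to the masters first, then update this domain */
		for (i = 0; i < MAX_MASTERS && d->masters[i]; i++)
			update_cpumask(d->masters[i], cpu, set);

		if (set)
			d->cpumask |= 1UL << cpu;
		else
			d->cpumask &= ~(1UL << cpu);
	}

	int main(void)
	{
		struct domain cluster = { .name = "cluster", .is_cpu_domain = true };
		struct domain cpu0 = { .name = "cpu0", .is_cpu_domain = true,
				       .masters = { &cluster } };

		update_cpumask(&cpu0, 0, true);
		printf("%s mask: 0x%lx, %s mask: 0x%lx\n",
		       cpu0.name, cpu0.cpumask, cluster.name, cluster.cpumask);
		return 0;
	}

Running it sets bit 0 in both cpu0's and cluster's masks; if cluster did
not have is_cpu_domain set, the recursion would stop there and its mask
would stay untouched, which is the case being asked about.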
[...]
Kind regards
Uffe