Message-ID: <YRPhI3AOAJLpQnjT@google.com>
Date: Wed, 11 Aug 2021 15:39:31 +0100
From: Quentin Perret <qperret@...gle.com>
To: Lukasz Luba <lukasz.luba@....com>
Cc: Viresh Kumar <viresh.kumar@...aro.org>,
Rafael Wysocki <rjw@...ysocki.net>,
Sudeep Holla <sudeep.holla@....com>,
Cristian Marussi <cristian.marussi@....com>,
linux-pm@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 9/9] cpufreq: scmi: Use .register_em() callback
On Wednesday 11 Aug 2021 at 15:09:13 (+0100), Lukasz Luba wrote:
>
>
> On 8/11/21 2:17 PM, Quentin Perret wrote:
> > On Wednesday 11 Aug 2021 at 17:28:47 (+0530), Viresh Kumar wrote:
> > > Set the newly added .register_em() callback to register with the EM
> > > after the cpufreq policy is properly initialized.
> > >
> > > Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
> > > ---
> > > drivers/cpufreq/scmi-cpufreq.c | 55 ++++++++++++++++++++--------------
> > > 1 file changed, 32 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
> > > index 75f818d04b48..b916c9e22921 100644
> > > --- a/drivers/cpufreq/scmi-cpufreq.c
> > > +++ b/drivers/cpufreq/scmi-cpufreq.c
> > > @@ -22,7 +22,9 @@
> > > struct scmi_data {
> > > int domain_id;
> > > + int nr_opp;
> > > struct device *cpu_dev;
> > > + cpumask_var_t opp_shared_cpus;
> >
> > Can we use policy->related_cpus and friends directly in the callback
>
> Unfortunately not. This tricky setup code was introduced because we may
> have a platform with per-CPU policies, so only a single bit is set in
> policy->related_cpus, but we still want EAS to work across the whole set
> of CPUs. That's why we construct a temporary cpumask and pass it to the EM.
Aha, I see this now. Hmm, those platforms better have AMUs then,
otherwise PELT signals will be wonky ...
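To make that concrete, here is a rough sketch of the mechanism described
above, not the exact patch contents: the field names (priv->opp_shared_cpus,
priv->nr_opp, priv->cpu_dev) come from the quoted diff, while the callback
name, scmi_get_cpu_power() as the EM active_power callback and the "true"
milliwatt flag are assumptions about the rest of the driver:

#include <linux/cpufreq.h>
#include <linux/energy_model.h>

/*
 * Illustrative only: with per-CPU policies, policy->related_cpus holds a
 * single CPU, but the EM perf domain must span every CPU sharing the OPP
 * table so EAS can reason about the whole set. The driver therefore keeps
 * its own mask (opp_shared_cpus, built at init time) and registers that
 * with the EM instead of policy->related_cpus.
 */
static void scmi_cpufreq_register_em(struct cpufreq_policy *policy)
{
	struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
	struct scmi_data *priv = policy->driver_data;

	/*
	 * priv->opp_shared_cpus may be wider than policy->related_cpus on
	 * platforms with per-CPU policies; "true" assumes power in mW.
	 */
	em_dev_register_perf_domain(priv->cpu_dev, priv->nr_opp, &em_cb,
				    priv->opp_shared_cpus, true);
}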
I was going to suggest using dev_pm_opp_get_sharing_cpus() from the
callback instead, but maybe that's overkill as we'd need to allocate a
temporary cpumask and all. So n/m, this patch should be fine as is.
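For comparison, a hypothetical sketch of that dev_pm_opp_get_sharing_cpus()
alternative, just to show the temporary cpumask allocation it would need
(scmi_cpufreq_register_em_alt() is a made-up name for illustration and the
other assumptions from the sketch above still apply):

#include <linux/cpumask.h>
#include <linux/pm_opp.h>

/*
 * Hypothetical alternative: rebuild the sharing mask from the OPP core
 * inside the callback instead of caching it in struct scmi_data.
 */
static void scmi_cpufreq_register_em_alt(struct cpufreq_policy *policy)
{
	struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
	struct scmi_data *priv = policy->driver_data;
	cpumask_var_t cpus;

	if (!zalloc_cpumask_var(&cpus, GFP_KERNEL))
		return;

	/* Ask the OPP core which CPUs share priv->cpu_dev's OPP table. */
	if (!dev_pm_opp_get_sharing_cpus(priv->cpu_dev, cpus))
		em_dev_register_perf_domain(priv->cpu_dev, priv->nr_opp,
					    &em_cb, cpus, true);

	free_cpumask_var(cpus);
}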