Message-ID: <849d8cd7-ca74-e9ed-851b-01addde05f1b@quicinc.com>
Date: Tue, 13 Feb 2024 11:12:17 +0530
From: Sibi Sankar <quic_sibis@...cinc.com>
To: Cristian Marussi <cristian.marussi@....com>
CC: <sudeep.holla@....com>, <rafael@...nel.org>, <viresh.kumar@...aro.org>,
<morten.rasmussen@....com>, <dietmar.eggemann@....com>,
<lukasz.luba@....com>, <linux-arm-kernel@...ts.infradead.org>,
<linux-pm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<quic_mdtipton@...cinc.com>, <linux-arm-msm@...r.kernel.org>
Subject: Re: [PATCH V2 4/4] cpufreq: scmi: Register for limit change
notifications
On 1/29/24 21:29, Cristian Marussi wrote:
> On Wed, Jan 17, 2024 at 04:11:16PM +0530, Sibi Sankar wrote:
>> Register for limit change notifications, if supported, with the help of the
>> perf_notify_support interface, and determine the throttled frequency
>> using perf_freq_xlate to apply HW pressure.
>>
Cristian,
Thanks for taking the time to review the series.
>> Signed-off-by: Sibi Sankar <quic_sibis@...cinc.com>
>> ---
>>
>> v2:
>> * Export cpufreq_update_pressure and use it directly [Lukasz]
>>
>> drivers/cpufreq/scmi-cpufreq.c | 42 +++++++++++++++++++++++++++++++++-
>> 1 file changed, 41 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
>> index 4ee23f4ebf4a..e0aa85764451 100644
>> --- a/drivers/cpufreq/scmi-cpufreq.c
>> +++ b/drivers/cpufreq/scmi-cpufreq.c
>> @@ -25,9 +25,13 @@ struct scmi_data {
>> int domain_id;
>> int nr_opp;
>> struct device *cpu_dev;
>> + struct cpufreq_policy *policy;
>> cpumask_var_t opp_shared_cpus;
>> + struct notifier_block limit_notify_nb;
>> };
>>
>> +const struct scmi_handle *handle;
>> +static struct scmi_device *scmi_dev;
>> static struct scmi_protocol_handle *ph;
>> static const struct scmi_perf_proto_ops *perf_ops;
>>
>> @@ -144,6 +148,22 @@ scmi_get_cpu_power(struct device *cpu_dev, unsigned long *power,
>> return 0;
>> }
>>
>> +static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned long event, void *data)
>> +{
>> + unsigned long freq_hz;
>> + struct scmi_perf_limits_report *limit_notify = data;
>> + struct scmi_data *priv = container_of(nb, struct scmi_data, limit_notify_nb);
>> + struct cpufreq_policy *policy = priv->policy;
>> +
>> + if (perf_ops->perf_freq_xlate(ph, priv->domain_id, limit_notify->range_max, &freq_hz))
>> + return NOTIFY_OK;
>> +
>> + policy->max = freq_hz / HZ_PER_KHZ;
>> + cpufreq_update_pressure(policy);
>> +
>> + return NOTIFY_OK;
>> +}
>> +
>> static int scmi_cpufreq_init(struct cpufreq_policy *policy)
>> {
>> int ret, nr_opp, domain;
>> @@ -151,6 +171,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
>> struct device *cpu_dev;
>> struct scmi_data *priv;
>> struct cpufreq_frequency_table *freq_table;
>> + struct scmi_perf_notify_info info = {};
>>
>> cpu_dev = get_cpu_device(policy->cpu);
>> if (!cpu_dev) {
>> @@ -250,6 +271,25 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
>> policy->fast_switch_possible =
>> perf_ops->fast_switch_possible(ph, domain);
>>
>> + ret = perf_ops->perf_notify_support(ph, domain, &info);
>> + if (ret)
>> + dev_warn(cpu_dev, "failed to get supported notifications: %d\n", ret);
>> +
>> + if (info.perf_limit_notify) {
>> + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb;
>> + ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, SCMI_PROTOCOL_PERF,
>> + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
>> + &domain,
>> + &priv->limit_notify_nb);
>> + if (ret) {
>> + dev_err(cpu_dev, "Error in registering limit change notifier for domain %d\n",
>> + domain);
>> + return ret;
>> + }
>
> Is there a reason to fail completely here if it was not possible to register
> the notifier? (even though it was expected to succeed, given perf_limit_notify
> was true...)
>
> Maybe a big fat warning that system perf could be degraded, but carrying on?
>
> Or maybe you have a good reason in mind to fail like you did; if so, please
> explain it in a comment.
Ack, a warning should suffice here.
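
Something along these lines (untested sketch, just swapping the dev_err +
early return for a dev_warn so probe carries on):

	if (info.perf_limit_notify) {
		priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb;
		ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, SCMI_PROTOCOL_PERF,
								       SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
								       &domain,
								       &priv->limit_notify_nb);
		/* Warn instead of failing probe: cpufreq still works, but
		 * HW pressure won't be applied on limit changes. */
		if (ret)
			dev_warn(cpu_dev, "failed to register for limit change notifications for domain %d\n",
				 domain);
	}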
-Sibi
>
> Thanks,
> Cristian