Message-ID: <CAJZ5v0j-NKveJEN9fuySrbWDy++rWKTr_QWLY3vcA7Df0e3rGQ@mail.gmail.com>
Date: Thu, 17 Mar 2016 13:54:05 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Juri Lelli <juri.lelli@....com>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM list <linux-pm@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Steve Muckle <steve.muckle@...aro.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Michael Turquette <mturquette@...libre.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH v5 7/7][Update] cpufreq: schedutil: New governor based on
scheduler utilization data
On Thu, Mar 17, 2016 at 12:30 PM, Juri Lelli <juri.lelli@....com> wrote:
> Hi Rafael,
>
> On 17/03/16 01:01, Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>
> [...]
>
>> +static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
>> +				unsigned int next_freq)
>> +{
>> +	struct cpufreq_policy *policy = sg_policy->policy;
>> +
>> +	sg_policy->last_freq_update_time = time;
>> +
>> +	if (policy->fast_switch_enabled) {
>> +		if (next_freq > policy->max)
>> +			next_freq = policy->max;
>> +		else if (next_freq < policy->min)
>> +			next_freq = policy->min;
>> +
>> +		if (sg_policy->next_freq == next_freq) {
>> +			trace_cpu_frequency(policy->cur, smp_processor_id());
>> +			return;
>> +		}
>> +		sg_policy->next_freq = next_freq;
>> +		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
>> +		if (next_freq == CPUFREQ_ENTRY_INVALID)
>> +			return;
>> +
>> +		policy->cur = next_freq;
>> +		trace_cpu_frequency(next_freq, smp_processor_id());
>> +	} else if (sg_policy->next_freq != next_freq) {
>> +		sg_policy->work_cpu = smp_processor_id();
>
> +		sg_policy->next_freq = next_freq;
>
Doh.
>> +		irq_work_queue(&sg_policy->irq_work);
>> +	}
>> +}
>
> Or we remain at max_f :-).
Sure, thanks!
Will fix.
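
For clarity, this is what the slow-switch branch looks like with the line Juri
points out folded in. It is only a sketch against the hunk quoted above (the
ordering and the work_cpu/irq_work fields are taken from that hunk), not
necessarily the exact form of the final fix:

	} else if (sg_policy->next_freq != next_freq) {
		sg_policy->work_cpu = smp_processor_id();
		/*
		 * Record the requested frequency before queuing the irq_work,
		 * so the deferred update programs the new value rather than a
		 * stale sg_policy->next_freq (which, as Juri's "max_f" remark
		 * suggests, could otherwise stay at the maximum).
		 */
		sg_policy->next_freq = next_freq;
		irq_work_queue(&sg_policy->irq_work);
	}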