Message-ID: <1462843489.4224.17.camel@linux.intel.com>
Date: Mon, 09 May 2016 18:24:49 -0700
From: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
To: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM list <linux-pm@...r.kernel.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] intel_pstate: Clean up
get_target_pstate_use_performance()
On Sat, 2016-05-07 at 01:47 +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>
> The way the code in get_target_pstate_use_performance() is arranged
> and the comments in there are totally confusing, so modify them to
> reflect what's going on.
>
> The results of the computations should be the same as before.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
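
The reordering is easy to verify as equivalent: mul_fp() is symmetric in
its arguments, so (writing avg_perf as shorthand for
cpu->sample.core_avg_perf)

    old: core_busy   = mul_fp(100 * avg_perf, div_fp(max_pstate, current_pstate));
    new: perf_scaled = mul_fp(div_fp(max_pstate, current_pstate), 100 * avg_perf);

compute identical values.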
> ---
>  drivers/cpufreq/intel_pstate.c |   32 +++++++++++++-------------------
>  1 file changed, 13 insertions(+), 19 deletions(-)
>
> Index: linux-pm/drivers/cpufreq/intel_pstate.c
> ===================================================================
> --- linux-pm.orig/drivers/cpufreq/intel_pstate.c
> +++ linux-pm/drivers/cpufreq/intel_pstate.c
> @@ -1241,43 +1241,37 @@ static inline int32_t get_target_pstate_
>  
>  static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
>  {
> -        int32_t core_busy, max_pstate, current_pstate, sample_ratio;
> +        int32_t perf_scaled, sample_ratio;
>          u64 duration_ns;
>  
>          /*
> -         * core_busy is the ratio of actual performance to max
> -         * max_pstate is the max non turbo pstate available
> -         * current_pstate was the pstate that was requested during
> -         * the last sample period.
> -         *
> -         * We normalize core_busy, which was our actual percent
> -         * performance to what we requested during the last sample
> -         * period. The result will be a percentage of busy at a
> -         * specified pstate.
> +         * perf_scaled is the average performance during the last sampling
> +         * period (in percent) scaled by the ratio of the P-state requested
> +         * last time to the maximum P-state.  That measures the system's
> +         * response to the previous P-state selection.
>           */
> -        core_busy = 100 * cpu->sample.core_avg_perf;
> -        max_pstate = cpu->pstate.max_pstate_physical;
> -        current_pstate = cpu->pstate.current_pstate;
> -        core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
> +        perf_scaled = div_fp(cpu->pstate.max_pstate_physical,
> +                             cpu->pstate.current_pstate);
> +        perf_scaled = mul_fp(perf_scaled, 100 * cpu->sample.core_avg_perf);
>  
>          /*
>           * Since our utilization update callback will not run unless we are
>           * in C0, check if the actual elapsed time is significantly greater (3x)
>           * than our sample interval.  If it is, then we were idle for a long
> -         * enough period of time to adjust our busyness.
> +         * enough period of time to adjust our performance metric.
>           */
>          duration_ns = cpu->sample.time - cpu->last_sample_time;
>          if ((s64)duration_ns > pid_params.sample_rate_ns * 3) {
>                  sample_ratio = div_fp(pid_params.sample_rate_ns, duration_ns);
> -                core_busy = mul_fp(core_busy, sample_ratio);
> +                perf_scaled = mul_fp(perf_scaled, sample_ratio);
>          } else {
>                  sample_ratio = div_fp(100 * cpu->sample.mperf, cpu->sample.tsc);
>                  if (sample_ratio < int_tofp(1))
> -                        core_busy = 0;
> +                        perf_scaled = 0;
>          }
>  
> -        cpu->sample.busy_scaled = core_busy;
> -        return cpu->pstate.current_pstate - pid_calc(&cpu->pid, core_busy);
> +        cpu->sample.busy_scaled = perf_scaled;
> +        return cpu->pstate.current_pstate - pid_calc(&cpu->pid, perf_scaled);
>  }
>  
>  static inline void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
>
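
For anyone who wants to sanity-check the arithmetic, here is a minimal
userspace sketch of the computation above. It assumes the driver's 8.8
fixed-point format (FRAC_BITS is 8 in intel_pstate.c) and models
mul_fp()/div_fp() on the driver's helpers; the input values are made up
for illustration and are not driver data, and the C0-residency check in
the else branch is omitted.

    #include <stdint.h>
    #include <stdio.h>

    #define FRAC_BITS 8
    #define int_tofp(x) ((int64_t)(x) << FRAC_BITS)

    /* (x * y) in 8.8 fixed point, modeled on the driver's helper. */
    static int32_t mul_fp(int32_t x, int32_t y)
    {
            return ((int64_t)x * (int64_t)y) >> FRAC_BITS;
    }

    /* (x / y) in 8.8 fixed point, modeled on the driver's helper. */
    static int32_t div_fp(int64_t x, int64_t y)
    {
            return (x << FRAC_BITS) / y;
    }

    int main(void)
    {
            int32_t max_pstate = 24;            /* max non-turbo P-state (made up) */
            int32_t current_pstate = 16;        /* P-state requested last time */
            int32_t avg_perf = int_tofp(1) / 2; /* core_avg_perf: ran at half speed */
            int64_t sample_rate_ns = 10000000;  /* 10 ms sample interval (made up) */
            int64_t duration_ns = 40000000;     /* 40 ms actually elapsed */
            int32_t perf_scaled;

            /* Average performance scaled by the max-to-requested P-state
             * ratio: 0.5 * 100 * (24 / 16) = 75, i.e. we delivered 75% of
             * the performance requested last time.
             */
            perf_scaled = div_fp(max_pstate, current_pstate);
            perf_scaled = mul_fp(perf_scaled, 100 * avg_perf);

            /* Long idle: more than 3x the sample interval elapsed, so scale
             * the metric by sample_rate / duration (0.25 here -> 18.75).
             */
            if (duration_ns > sample_rate_ns * 3)
                    perf_scaled = mul_fp(perf_scaled,
                                         div_fp(sample_rate_ns, duration_ns));

            printf("perf_scaled = %d.%02d\n", perf_scaled >> FRAC_BITS,
                   ((perf_scaled & ((1 << FRAC_BITS) - 1)) * 100) >> FRAC_BITS);
            return 0;
    }

Feeding the same inputs through the old core_busy ordering gives the
identical result, consistent with the changelog's statement that the
results of the computations should be the same as before.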