Message-ID: <CAJZ5v0in0N6USFqvuLcccpirahj=oOki67+Lq9_5i_q7Ep7T6g@mail.gmail.com>
Date: Fri, 16 Oct 2020 19:01:42 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Wei Wang <wvw@...gle.com>
Cc: wei.vince.wang@...il.com, Viresh Kumar <viresh.kumar@...aro.org>,
Quentin Perret <qperret@...gle.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: cpufreq_schedutil: maintain raw cache when next_f is not changed
On Fri, Oct 16, 2020 at 6:36 PM Wei Wang <wvw@...gle.com> wrote:
>
> Currently, the raw cache is reset for correctness when next_f is
> changed after get_next_freq(). However, this may introduce more
> cycles. This patch maintains the cached value instead of dropping
> it.
IMV you need to be more specific about why this helps.
> This is adapted from https://android-review.googlesource.com/1352810/
>
> Signed-off-by: Wei Wang <wvw@...gle.com>
> ---
> kernel/sched/cpufreq_schedutil.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 5ae7b4e6e8d6..ae3ae7fcd027 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -31,6 +31,7 @@ struct sugov_policy {
> s64 freq_update_delay_ns;
> unsigned int next_freq;
> unsigned int cached_raw_freq;
> + unsigned int prev_cached_raw_freq;
>
> /* The next fields are only needed if fast switch cannot be used: */
> struct irq_work irq_work;
> @@ -165,6 +166,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> return sg_policy->next_freq;
>
> sg_policy->need_freq_update = false;
> + sg_policy->prev_cached_raw_freq = sg_policy->cached_raw_freq;
> sg_policy->cached_raw_freq = freq;
> return cpufreq_driver_resolve_freq(policy, freq);
> }
> @@ -464,8 +466,8 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
> if (busy && next_f < sg_policy->next_freq) {
> next_f = sg_policy->next_freq;
>
> - /* Reset cached freq as next_freq has changed */
> - sg_policy->cached_raw_freq = 0;
> + /* Restore cached freq as next_freq has changed */
> + sg_policy->cached_raw_freq = sg_policy->prev_cached_raw_freq;
> }
>
> /*
> @@ -828,6 +830,7 @@ static int sugov_start(struct cpufreq_policy *policy)
> sg_policy->limits_changed = false;
> sg_policy->need_freq_update = false;
> sg_policy->cached_raw_freq = 0;
> + sg_policy->prev_cached_raw_freq = 0;
>
> for_each_cpu(cpu, policy->cpus) {
> struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>
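To make the question above concrete: as I understand it, the cycles
being saved are in the early-return path of get_next_freq() visible in
the context lines of the first hunk (same raw frequency and no pending
limits update means cpufreq_driver_resolve_freq() is skipped). Below is
a minimal user-space model of that interaction; it is a simplified
sketch, not the kernel code, and the frequencies, the resolve_freq()
helper and the resolve_calls counter are made up for illustration:

/*
 * Simplified user-space model of the schedutil raw-frequency cache
 * (illustration only, not the kernel code).
 */
#include <stdbool.h>
#include <stdio.h>

struct sugov_policy {
	unsigned int next_freq;
	unsigned int cached_raw_freq;
	unsigned int prev_cached_raw_freq;
	bool need_freq_update;
};

static unsigned int resolve_calls;	/* counts the "expensive" step */

static unsigned int resolve_freq(unsigned int raw)
{
	resolve_calls++;	/* stand-in for cpufreq_driver_resolve_freq() */
	return raw;
}

static unsigned int get_next_freq(struct sugov_policy *sg, unsigned int raw)
{
	/* Early return as in the context lines of the first hunk. */
	if (raw == sg->cached_raw_freq && !sg->need_freq_update)
		return sg->next_freq;

	sg->need_freq_update = false;
	sg->prev_cached_raw_freq = sg->cached_raw_freq;
	sg->cached_raw_freq = raw;
	return resolve_freq(raw);
}

int main(void)
{
	/* The policy previously settled at a raw request of 1000000. */
	struct sugov_policy sg = {
		.next_freq = 1000000,
		.cached_raw_freq = 1000000,
	};
	unsigned int next_f;

	/* Utilization dips: raw request drops to 800000 and resolves. */
	next_f = get_next_freq(&sg, 800000);

	/* Busy case from sugov_update_single(): keep the higher freq. */
	if (next_f < sg.next_freq) {
		next_f = sg.next_freq;
		/* Pre-patch: sg.cached_raw_freq = 0; loses the cache. */
		sg.cached_raw_freq = sg.prev_cached_raw_freq;
	}

	/* Utilization comes back: same raw request as before the dip. */
	get_next_freq(&sg, 1000000);

	/* Prints 1 with the restore; the pre-patch reset would give 2. */
	printf("resolve calls: %u\n", resolve_calls);
	return 0;
}

So the win shows up when, after a busy-case clamp, the utilization
returns to the level that produced the current next_freq; how often
that happens on real workloads is exactly the kind of detail the
changelog should spell out.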