Message-ID: <18a910f3-261e-18fb-931e-e024b2a20f0b@arm.com>
Date: Thu, 17 Oct 2019 12:19:55 +0100
From: Douglas Raillard <douglas.raillard@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
linux-kernel@...r.kernel.org
Cc: linux-pm@...r.kernel.org, mingo@...hat.com, peterz@...radead.org,
rjw@...ysocki.net, viresh.kumar@...aro.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, qperret@...rret.net,
patrick.bellasi@...bug.net, dh.han@...sung.com
Subject: Re: [RFC PATCH v3 4/6] sched/cpufreq: Introduce sugov_cpu_ramp_boost
On 10/17/19 9:57 AM, Dietmar Eggemann wrote:
> On 11/10/2019 15:44, Douglas RAILLARD wrote:
>
> [...]
>
>> @@ -181,6 +185,42 @@ static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
>> }
>> }
>>
>> +static unsigned long sugov_cpu_ramp_boost(struct sugov_cpu *sg_cpu)
>> +{
>> + return READ_ONCE(sg_cpu->ramp_boost);
>> +}
>> +
>> +static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
>> +{
>> + struct rq *rq = cpu_rq(sg_cpu->cpu);
>> + unsigned long util_est_enqueued;
>> + unsigned long util_avg;
>> + unsigned long boost = 0;
>> +
>> + util_est_enqueued = READ_ONCE(rq->cfs.avg.util_est.enqueued);
>> + util_avg = READ_ONCE(rq->cfs.avg.util_avg);
>> +
>> + /*
>> + * Boost when util_avg becomes higher than the previous stable
>> + * knowledge of the enqueued tasks' set util, which is CPU's
>> + * util_est_enqueued.
>> + *
>> + * We try to spot changes in the workload itself, so we want to
>> + * avoid the noise of tasks being enqueued/dequeued. To do that,
>> + * we only trigger boosting when the "amount of work' enqueued
>
> s/"amount of work'/"amount of work" or 'amount of work'
>
> [...]
>
>> @@ -552,6 +593,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
>> unsigned long j_util, j_max;
>>
>> j_util = sugov_get_util(j_sg_cpu);
>> + if (j_sg_cpu == sg_cpu)
>> + sugov_cpu_ramp_boost_update(sg_cpu);
>
> Can you not call this already in sugov_update_shared(), like in the
> sugov_update_single() case?
The next commit in the series needs to aggregate the ramp_boost of all CPUs in the policy,
so this call will end up here anyway, unless we want to set the value at the previous level and
query it back again in the loop. I don't mind either way, but since neither option seems
faster than the other, I went for clustering the ramp boost code rather than spreading it
across all levels.
> diff --git a/kernel/sched/cpufreq_schedutil.c
> b/kernel/sched/cpufreq_schedutil.c
> index e35c20b42780..4c53f63a537d 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -595,8 +595,6 @@ static unsigned int sugov_next_freq_shared(struct
> sugov_cpu *sg_cpu, u64 time)
> unsigned long j_util, j_max;
>
> j_util = sugov_get_util(j_sg_cpu);
> - if (j_sg_cpu == sg_cpu)
> - sugov_cpu_ramp_boost_update(sg_cpu);
> j_max = j_sg_cpu->max;
> j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
>
> @@ -625,6 +623,7 @@ sugov_update_shared(struct update_util_data *hook,
> u64 time, unsigned int flags)
> ignore_dl_rate_limit(sg_cpu, sg_policy);
>
> if (sugov_should_update_freq(sg_policy, time)) {
> + sugov_cpu_ramp_boost_update(sg_cpu);
> next_f = sugov_next_freq_shared(sg_cpu, time);
>
> [...]
>