Message-ID: <072d862a-7b9c-3d17-d0cc-a3082b826cf9@arm.com>
Date: Mon, 18 Jun 2018 11:00:40 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>, peterz@...radead.org,
mingo@...nel.org, linux-kernel@...r.kernel.org
Cc: rjw@...ysocki.net, juri.lelli@...hat.com, Morten.Rasmussen@....com,
viresh.kumar@...aro.org, valentin.schneider@....com,
patrick.bellasi@....com, joel@...lfernandes.org,
daniel.lezcano@...aro.org, quentin.perret@....com,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH v6 04/11] cpufreq/schedutil: use rt utilization tracking
On 06/08/2018 02:09 PM, Vincent Guittot wrote:
> Take into account rt utilization when selecting an OPP for cfs tasks in order
> to reflect the utilization of the CPU.
The rt utilization signal is only tracked per-cpu, not per-entity, so it
is not aware of PELT migrations (there is no attach/detach when a task
moves between CPUs).
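For reference, the rt signal consumed here is, if I read the series
correctly, just the per-rq PELT average, i.e. something along the lines
of:

	static inline unsigned long cpu_util_rt(struct rq *rq)
	{
		return READ_ONCE(rq->avg_rt.util_avg);
	}

so on migration the contribution only decays on the source CPU and ramps
up from zero on the destination, rather than being transferred like the
per-entity cfs signal.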
IMHO, this patch deserves some explanation of why the temporary
inflation/deflation of the OPP-driving utilization signal when an
rt task migrates off/onto a CPU (due to the missing detach/attach for
the rt signal) doesn't harm performance or energy consumption.
There was some talk (mainly on #sched irc) about (1) preempted cfs
tasks (whose demand signals are reduced, since utilization is only
accrued while running) using the leftover rt utilization of an rt task
which migrated off, and (2) going to max when an rt task runs, but a
summary of all of that in this patch would really help to understand.
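To make (1) concrete with made-up numbers (max = 1024): say an rt task
with ~300 of utilization preempts a cfs task on CPU0 and later migrates
to CPU1. Since nothing is detached on CPU0:

	CPU0: util_cfs ~ 250 (decayed while preempted)
	      + util_rt ~ 300 (stale, only decays over time)
	      -> min(1024, 550): the leftover rt contribution roughly
	         covers what the preempted cfs task lost

	CPU1: rq->rt.rt_nr_running > 0 -> sugov_aggregate_util() returns
	      max anyway, so the missing attach is hidden while the rt
	      task is runnable

If that is indeed the reasoning, spelling it out in the changelog would
help.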
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> ---
> kernel/sched/cpufreq_schedutil.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 28592b6..32f97fb 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -56,6 +56,7 @@ struct sugov_cpu {
> /* The fields below are only needed when sharing a policy: */
> unsigned long util_cfs;
> unsigned long util_dl;
> + unsigned long util_rt;
> unsigned long max;
>
> /* The field below is for single-CPU policies only: */
> @@ -178,15 +179,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
> sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
> sg_cpu->util_cfs = cpu_util_cfs(rq);
> sg_cpu->util_dl = cpu_util_dl(rq);
> + sg_cpu->util_rt = cpu_util_rt(rq);
> }
>
> static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
> {
> struct rq *rq = cpu_rq(sg_cpu->cpu);
> + unsigned long util;
>
> if (rq->rt.rt_nr_running)
> return sg_cpu->max;
>
> + util = sg_cpu->util_dl;
> + util += sg_cpu->util_cfs;
> + util += sg_cpu->util_rt;
> +
> /*
> * Utilization required by DEADLINE must always be granted while, for
> * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
> @@ -197,7 +204,7 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
> * util_cfs + util_dl as requested freq. However, cpufreq is not yet
> * ready for such an interface. So, we only do the latter for now.
> */
> - return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
> + return min(sg_cpu->max, util);
> }
>
> static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
>
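FWIW, after this patch the aggregation path should end up looking like
this (reconstructed from the hunks above, original comment abbreviated):

	static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
	{
		struct rq *rq = cpu_rq(sg_cpu->cpu);
		unsigned long util;

		if (rq->rt.rt_nr_running)
			return sg_cpu->max;

		util  = sg_cpu->util_dl;
		util += sg_cpu->util_cfs;
		util += sg_cpu->util_rt;

		/* DL must always be granted; clamp at CPU capacity. */
		return min(sg_cpu->max, util);
	}

i.e. util_rt only influences OPP selection while no rt task is runnable,
since otherwise we go to max before it is even read.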