Message-Id: <1495616452-7582-3-git-send-email-vincent.guittot@linaro.org>
Date: Wed, 24 May 2017 11:00:52 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: rjw@...ysocki.net, juri.lelli@....com, dietmar.eggemann@....com,
Morten.Rasmussen@....com,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH 2/2] cpufreq/schedutil: add rt utilization tracking
Add the rt_rq's utilization to the cfs_rq's when selecting an OPP for cfs
tasks, as rt tasks can preempt cfs tasks and steal their running time.

This prevents the frequency from dropping when rt tasks steal running time
from cfs tasks, which makes the cfs utilization appear lower than it
really is.
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
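For reference, a rough sketch of how sugov_get_util() reads once this
change is applied (field and helper names taken from the hunk below; the
local rq setup via this_rq() is assumed from this tree and may differ
slightly):

static void sugov_get_util(unsigned long *util, unsigned long *max)
{
        struct rq *rq = this_rq();
        unsigned long cfs_max;

        /* Max capacity of this CPU, used as the ceiling for the request. */
        cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());

        /*
         * Sum cfs and rt utilization so that running time stolen by rt
         * tasks is still accounted for, then clamp to the CPU capacity.
         */
        *util = min(rq->cfs.avg.util_avg + rq->rt.avg.util_avg, cfs_max);
        *max = cfs_max;
}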
kernel/sched/cpufreq_schedutil.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 622eed1..bc292b92 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -164,7 +164,7 @@ static void sugov_get_util(unsigned long *util, unsigned long *max)
cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
- *util = min(rq->cfs.avg.util_avg, cfs_max);
+ *util = min(rq->cfs.avg.util_avg + rq->rt.avg.util_avg, cfs_max);
*max = cfs_max;
}
--
2.7.4