Message-Id: <20231122140119.472110-1-vincent.guittot@linaro.org>
Date: Wed, 22 Nov 2023 15:01:19 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
rafael@...nel.org, viresh.kumar@...aro.org, qyousef@...alina.io,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Cc: lukasz.luba@....com, Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH] sched/pelt: avoid underestimate of task utilization

It has been reported that a thread's util_est can significantly decrease as
a result of sharing the CPU with other threads.

The use case can easily be reproduced with a periodic task TA that runs for
1ms and sleeps for 100us. When the task is alone on the CPU, its max
utilization and its util_est are around 888. If another similar task starts
to run on the same CPU, TA will have to share the CPU runtime and its
maximum utilization will decrease to around half the CPU capacity (512);
TA's util_est will then follow this new maximum trend, which is only the
result of sharing the CPU with other tasks.

Such a situation can be detected with runnable_avg, which is close to or
equal to util_avg when TA is alone but increases above util_avg when TA
shares the CPU with other threads and waits on the runqueue.
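As a purely illustrative example, the sketch below shows how the EWMA behind
util_est drifts from ~888 towards ~512 once every dequeue reports a "shared
CPU" utilization sample, assuming the 1/4 EWMA weight used by util_est
(UTIL_EST_WEIGHT_SHIFT); the numbers and the ewma_update() helper are made
up for the example and are not kernel code:

/*
 * Illustration only: approximate drift of util_est's EWMA when each
 * dequeue reports a utilization sample of ~512 because the CPU is
 * shared, assuming a 1/4 EWMA weight as in util_est.
 */
#include <stdio.h>

static unsigned long ewma_update(unsigned long ewma, unsigned long sample)
{
	/* new_ewma = old_ewma + (sample - old_ewma) / 4 */
	long diff = (long)sample - (long)ewma;

	return ewma + diff / 4;
}

int main(void)
{
	unsigned long ewma = 888;	/* util_est while TA runs alone */
	int i;

	/* Each iteration models one dequeue while the CPU is shared. */
	for (i = 1; i <= 10; i++) {
		ewma = ewma_update(ewma, 512);
		printf("after %2d updates: util_est ~= %lu\n", i, ewma);
	}

	return 0;
}

Skipping the EWMA update whenever runnable_avg shows that the task was
stalled on the runqueue prevents this artificial convergence towards 512.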
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
This patch implements what I mentioned in [1]. I have been able to
reproduce such a pattern with rt-app.
[1] https://lore.kernel.org/lkml/CAKfTPtDd-HhF-YiNTtL9i5k0PfJbF819Yxu4YquzfXgwi7voyw@mail.gmail.com/#t
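For reference, below is a rough stand-alone C reproducer of the scenario
described above (run ~1ms, sleep ~100us); it is only a sketch, assuming two
instances are pinned to the same CPU (e.g. with taskset), and is not the
rt-app configuration that was actually used:

/*
 * Rough reproducer sketch: a periodic task that busy-runs for about
 * 1ms of wall-clock time and then sleeps for 100us. Start two
 * instances pinned to the same CPU, e.g.:
 *   taskset -c 0 ./burst & taskset -c 0 ./burst &
 * and observe the tasks' utilization signals (e.g. via tracing).
 */
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	const struct timespec sleep_100us = { .tv_sec = 0, .tv_nsec = 100 * 1000 };

	for (;;) {
		uint64_t start = now_ns();

		/* Busy loop for roughly 1ms. */
		while (now_ns() - start < 1000000ULL)
			;

		/* Sleep 100us before the next activation. */
		clock_nanosleep(CLOCK_MONOTONIC, 0, &sleep_100us, NULL);
	}

	return 0;
}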
kernel/sched/fair.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 07f555857698..eeb505d28905 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4774,6 +4774,11 @@ static inline unsigned long task_util(struct task_struct *p)
 	return READ_ONCE(p->se.avg.util_avg);
 }
 
+static inline unsigned long task_runnable(struct task_struct *p)
+{
+	return READ_ONCE(p->se.avg.runnable_avg);
+}
+
 static inline unsigned long _task_util_est(struct task_struct *p)
 {
 	struct util_est ue = READ_ONCE(p->se.avg.util_est);
@@ -4892,6 +4897,14 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	if (task_util(p) > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
 		return;
 
+	/*
+	 * To avoid underestimate of task utilization, skip updates of ewma if
+	 * we cannot grant that thread got all CPU time it wanted.
+	 */
+	if ((ue.enqueued + UTIL_EST_MARGIN) < task_runnable(p))
+		goto done;
+
+
 	/*
 	 * Update Task's estimated utilization
 	 *
--
2.34.1