Message-ID: <171593317056.10875.16807671929410125258.tip-bot2@tip-bot2>
Date: Fri, 17 May 2024 08:06:10 -0000
From: "tip-bot2 for Dawei Li" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Dawei Li <daweilics@...il.com>, Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Vishal Chourasia <vishalc@...ux.ibm.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] sched/fair: Fix initial util_avg calculation

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: 72bffbf57c5247ac6146d1103ef42e9f8d094bc8
Gitweb: https://git.kernel.org/tip/72bffbf57c5247ac6146d1103ef42e9f8d094bc8
Author: Dawei Li <daweilics@...il.com>
AuthorDate: Thu, 14 Mar 2024 18:59:16 -07:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Fri, 17 May 2024 09:49:44 +02:00

sched/fair: Fix initial util_avg calculation

Change se->load.weight to se_weight(se) in the calculation for the
initial util_avg to avoid unnecessarily inflating the util_avg by 1024
times.

The reason is that se->load.weight has the unit/scale of the scaled-up
load, while cfs_rq->avg.load_avg has the unit/scale of the true task
weight (as mapped directly from the task's nice/priority value). With
CONFIG_32BIT, the scaled-up load is equal to the true task weight. With
CONFIG_64BIT, the scaled-up load is 1024 times the true task weight.
Thus, the current code may inflate the util_avg by 1024 times. The
follow-up capping keeps the util_avg value from going wild, but the
calculation should use the correct scale in the first place.

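For reference, a simplified sketch of the scaling helpers involved
(paraphrased from kernel/sched/sched.h and kernel/sched/fair.c; recent
kernels additionally clamp scale_load_down() to a minimum value):

  #ifdef CONFIG_64BIT
  # define scale_load(w)      ((w) << SCHED_FIXEDPOINT_SHIFT) /* x1024 */
  # define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT) /* /1024 */
  #else
  # define scale_load(w)      (w)
  # define scale_load_down(w) (w)
  #endif

  /* Back from the scaled-up load to the true task weight: */
  static inline long se_weight(struct sched_entity *se)
  {
          return scale_load_down(se->load.weight);
  }

Concretely, a nice-0 task on CONFIG_64BIT has se->load.weight ==
1024 << 10 == 1048576, whereas se_weight(se) == 1024 matches the scale
at which cfs_rq->avg.load_avg is accumulated.
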
Signed-off-by: Dawei Li <daweilics@...il.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Reviewed-by: Vishal Chourasia <vishalc@...ux.ibm.com>
Link: https://lore.kernel.org/r/20240315015916.21545-1-daweilics@gmail.com

---
kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 146ecf9..9009787 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1031,7 +1031,8 @@ void init_entity_runnable_average(struct sched_entity *se)
  * With new tasks being created, their initial util_avgs are extrapolated
  * based on the cfs_rq's current util_avg:
  *
- *   util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight
+ *   util_avg = cfs_rq->avg.util_avg / (cfs_rq->avg.load_avg + 1)
+ *		* se_weight(se)
  *
  * However, in many cases, the above util_avg does not give a desired
  * value. Moreover, the sum of the util_avgs may be divergent, such
@@ -1078,7 +1079,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
-			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+			sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
 			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
 
 			if (sa->util_avg > cap)
 				sa->util_avg = cap;
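
As a rough illustration of the magnitude involved (a hypothetical
standalone userspace program, not kernel code; it assumes CONFIG_64BIT
behaviour with SCHED_FIXEDPOINT_SHIFT == 10 and made-up cfs_rq
averages):

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT 10

  int main(void)
  {
          /* nice-0 task: scaled-up weight; true weight is 1024 */
          unsigned long weight   = 1024UL << SCHED_FIXEDPOINT_SHIFT;
          unsigned long util_avg = 300, load_avg = 1024;

          /* old: multiplies by the scaled-up weight, ~1024x too big */
          printf("old: %lu\n", util_avg * weight / (load_avg + 1));

          /* new: se_weight() scale, consistent with load_avg */
          printf("new: %lu\n",
                 util_avg * (weight >> SCHED_FIXEDPOINT_SHIFT) /
                 (load_avg + 1));
          return 0;
  }

Before the follow-up capping kicks in, the old formula yields ~306900
here, while the fixed one yields ~299, removing the factor-of-1024
inflation the patch describes.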