Message-ID: <159308600329.16989.15117119529654032761.tip-bot2@tip-bot2>
Date: Thu, 25 Jun 2020 11:53:23 -0000
From: "tip-bot2 for Vincent Guittot" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: kernel test robot <rong.a.chen@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: sched/urgent] sched/cfs: change initial value of runnable_avg

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: 68f7b5cc835de7d5b6c7696533c126018171e793
Gitweb: https://git.kernel.org/tip/68f7b5cc835de7d5b6c7696533c126018171e793
Author: Vincent Guittot <vincent.guittot@...aro.org>
AuthorDate: Wed, 24 Jun 2020 17:44:22 +02:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 25 Jun 2020 13:45:38 +02:00

sched/cfs: change initial value of runnable_avg

A performance regression on the reaim benchmark has been reported against
commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group").
The problem comes from the initial value of runnable_avg, which is set to
the max value. This is a problem if the newly forked task turns out to be
a short-running one, because the group of CPUs is wrongly classified as
overloaded and tasks are pulled less aggressively.
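
As a rough illustration, the check below is a simplified, standalone model
of the group capacity classification that 070f5e860ee2 introduced (not the
kernel's exact code). The helper name, the imbalance_pct value and the
sample loads are illustrative assumptions:

#include <stdio.h>
#include <stdbool.h>

/*
 * Simplified model: a group is considered short of capacity once its
 * accumulated runnable_avg exceeds its capacity scaled by imbalance_pct
 * (expressed in percent).
 */
static bool group_has_capacity_sketch(unsigned long group_capacity,
				      unsigned long group_runnable,
				      unsigned int imbalance_pct)
{
	return group_capacity * imbalance_pct >= group_runnable * 100;
}

int main(void)
{
	unsigned long capacity = 1024;		/* one CPU at full scale */
	unsigned long existing = 400;		/* runnable already on the group */
	unsigned int imbalance_pct = 117;	/* assumed threshold */

	/* Old init: a fresh fork adds runnable_avg = cpu_scale (1024),
	 * so a single short-lived task tips the group into overloaded. */
	printf("max-init fork:  has_capacity=%d\n",
	       group_has_capacity_sketch(capacity, existing + 1024,
					 imbalance_pct));

	/* New init: runnable_avg starts at util_avg (small for a fresh
	 * task, say 256), so the group keeps spare capacity. */
	printf("util-init fork: has_capacity=%d\n",
	       group_has_capacity_sketch(capacity, existing + 256,
					 imbalance_pct));
	return 0;
}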

Set the initial value of runnable_avg equal to util_avg, to reflect that
there is no waiting time so far.
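
For reference, here is a condensed sketch of the fork-time seeding done in
post_init_entity_util_avg(), with the patched line at the end. The field
names mirror kernel/sched/fair.c, but the stand-in structures, the helper
name and the demo values are simplified assumptions:

#include <stdio.h>

/* Minimal stand-ins for the kernel structures involved. */
struct sched_avg { long util_avg; long runnable_avg; };
struct cfs_rq_avg { long util_avg; long load_avg; };

static void post_init_sketch(struct sched_avg *sa,
			     const struct cfs_rq_avg *rq_avg,
			     long se_weight, long cpu_scale)
{
	long cap = (cpu_scale - rq_avg->util_avg) / 2;

	if (cap > 0) {
		if (rq_avg->util_avg != 0) {
			/* Give the new entity a share of the rq's util,
			 * proportional to its weight, capped at half of
			 * the spare capacity. */
			sa->util_avg = rq_avg->util_avg * se_weight
				       / (rq_avg->load_avg + 1);
			if (sa->util_avg > cap)
				sa->util_avg = cap;
		} else {
			sa->util_avg = cap;
		}
	}

	/* The fix: seed runnable_avg from util_avg instead of cpu_scale.
	 * A task that has never run has accrued no waiting time yet. */
	sa->runnable_avg = sa->util_avg;
}

int main(void)
{
	struct sched_avg sa = { 0, 0 };
	struct cfs_rq_avg rq_avg = { .util_avg = 300, .load_avg = 2048 };

	post_init_sketch(&sa, &rq_avg, 1024, 1024);
	/* Prints small matching values, not the max (1024). */
	printf("util_avg=%ld runnable_avg=%ld\n",
	       sa.util_avg, sa.runnable_avg);
	return 0;
}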

Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
Reported-by: kernel test robot <rong.a.chen@...el.com>
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20200624154422.29166-1-vincent.guittot@linaro.org
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cbcb2f7..658aa7a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -806,7 +806,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		}
 	}
 
-	sa->runnable_avg = cpu_scale;
+	sa->runnable_avg = sa->util_avg;
 
 	if (p->sched_class != &fair_sched_class) {
 		/*