Message-ID: <158991385628.17951.11829788536387109935.tip-bot2@tip-bot2>
Date: Tue, 19 May 2020 18:44:16 -0000
From: "tip-bot2 for Vincent Guittot" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Phil Auld <pauld@...hat.com>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: sched/core] sched/fair: Optimize enqueue_task_fair()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 7d148be69e3a0eaa9d029a3c51b545e322116a99
Gitweb: https://git.kernel.org/tip/7d148be69e3a0eaa9d029a3c51b545e322116a99
Author: Vincent Guittot <vincent.guittot@...aro.org>
AuthorDate: Wed, 13 May 2020 15:55:02 +02:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 19 May 2020 20:34:13 +02:00

sched/fair: Optimize enqueue_task_fair()

enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se)
is throttled, which means that se cannot be NULL on that path; the label can
therefore be moved past the if (!se) statement. Furthermore, that statement
can then be removed entirely, because se is always NULL when both loops run
to completion and control falls through to that point.

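To see why se is always NULL there, here is a simplified sketch of the shape
of enqueue_task_fair() (not the verbatim kernel source; it assumes
CONFIG_FAIR_GROUP_SCHED and elides the per-entity bookkeeping inside the
loops). for_each_sched_entity() walks se up the task-group hierarchy until it
steps past the root, so the only ways to leave the loops with a non-NULL se
are the goto enqueue_throttle paths:

static void
enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq;

	for_each_sched_entity(se) {	/* for (; se; se = se->parent) */
		if (se->on_rq)
			break;		/* second loop continues from se */
		cfs_rq = cfs_rq_of(se);
		enqueue_entity(cfs_rq, se, flags);

		if (cfs_rq_throttled(cfs_rq))
			goto enqueue_throttle;	/* se != NULL here */
	}

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		update_load_avg(cfs_rq, se, UPDATE_TG);

		if (cfs_rq_throttled(cfs_rq))
			goto enqueue_throttle;	/* se != NULL here */
	}

	/*
	 * Both loops ran to completion, so se has walked past the root
	 * and is NULL: the old "if (!se)" test was always true here.
	 */
	add_nr_running(rq, 1);
	...

enqueue_throttle:
	...
}

With the label below the root-level accounting, the throttled paths skip
add_nr_running() and the overutilized update entirely, and the fallthrough
path no longer needs the NULL check.
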
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Phil Auld <pauld@...hat.com>
Link: https://lkml.kernel.org/r/20200513135502.4672-1-vincent.guittot@linaro.org
---
kernel/sched/fair.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a58874..4e58686 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
list_add_leaf_cfs_rq(cfs_rq);
}
-enqueue_throttle:
- if (!se) {
- add_nr_running(rq, 1);
- /*
- * Since new tasks are assigned an initial util_avg equal to
- * half of the spare capacity of their CPU, tiny tasks have the
- * ability to cross the overutilized threshold, which will
- * result in the load balancer ruining all the task placement
- * done by EAS. As a way to mitigate that effect, do not account
- * for the first enqueue operation of new tasks during the
- * overutilized flag detection.
- *
- * A better way of solving this problem would be to wait for
- * the PELT signals of tasks to converge before taking them
- * into account, but that is not straightforward to implement,
- * and the following generally works well enough in practice.
- */
- if (flags & ENQUEUE_WAKEUP)
- update_overutilized_status(rq);
+	/* At this point se is NULL and we are at root level */
+ add_nr_running(rq, 1);
- }
+ /*
+ * Since new tasks are assigned an initial util_avg equal to
+ * half of the spare capacity of their CPU, tiny tasks have the
+ * ability to cross the overutilized threshold, which will
+ * result in the load balancer ruining all the task placement
+ * done by EAS. As a way to mitigate that effect, do not account
+ * for the first enqueue operation of new tasks during the
+ * overutilized flag detection.
+ *
+ * A better way of solving this problem would be to wait for
+ * the PELT signals of tasks to converge before taking them
+ * into account, but that is not straightforward to implement,
+ * and the following generally works well enough in practice.
+ */
+ if (flags & ENQUEUE_WAKEUP)
+ update_overutilized_status(rq);
+enqueue_throttle:
if (cfs_bandwidth_used()) {
/*
* When bandwidth control is enabled; the cfs_rq_throttled()