Message-Id: <20190722173348.9241-14-riel@surriel.com>
Date: Mon, 22 Jul 2019 13:33:47 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-team@...com, pjt@...gle.com, dietmar.eggemann@....com,
peterz@...radead.org, mingo@...hat.com, morten.rasmussen@....com,
tglx@...utronix.de, mgorman@...hsingularity.net,
vincent.guittot@...aro.org, Rik van Riel <riel@...riel.com>
Subject: [PATCH 13/14] sched,fair: flatten update_curr functionality
Make it clear that update_curr only works on task sched entities now.
There is no need for task_tick_fair to call it on every sched entity up
the hierarchy, so move the call out of entity_tick.
Signed-off-by: Rik van Riel <riel@...riel.com>
Header from folded patch 'fix-attach-detach_enticy_cfs_rq.patch':
Subject: sched,fair: fix attach/detach_entity_cfs_rq
While attach_entity_cfs_rq and detach_entity_cfs_rq should iterate over
the hierarchy, they do not need to do so twice.
Passing flags into propagate_entity_cfs_rq allows us to reuse that same
loop from other functions.
Signed-off-by: Rik van Riel <riel@...riel.com>
---
kernel/sched/fair.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 224cd9b20887..4c7e1818efba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -872,10 +872,11 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
static void update_curr(struct cfs_rq *cfs_rq)
{
struct sched_entity *curr = cfs_rq->curr;
+ struct task_struct *curtask;
u64 now = rq_clock_task(rq_of(cfs_rq));
u64 delta_exec;
- if (unlikely(!curr))
+ if (unlikely(!curr) || !entity_is_task(curr))
return;
delta_exec = now - curr->exec_start;
@@ -893,13 +894,10 @@ static void update_curr(struct cfs_rq *cfs_rq)
curr->vruntime += calc_delta_fair(delta_exec, curr);
update_min_vruntime(cfs_rq);
- if (entity_is_task(curr)) {
- struct task_struct *curtask = task_of(curr);
-
- trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
- cgroup_account_cputime(curtask, delta_exec);
- account_group_exec_runtime(curtask, delta_exec);
- }
+ curtask = task_of(curr);
+ trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
+ cgroup_account_cputime(curtask, delta_exec);
+ account_group_exec_runtime(curtask, delta_exec);
account_cfs_rq_runtime(cfs_rq, delta_exec);
}
@@ -4192,11 +4190,6 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
static void
entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
{
- /*
- * Update run-time statistics of the 'current'.
- */
- update_curr(cfs_rq);
-
/*
* Ensure that runnable average is periodically updated.
*/
@@ -10025,6 +10018,11 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
struct cfs_rq *cfs_rq;
struct sched_entity *se = &curr->se;
+ /*
+ * Update run-time statistics of the 'current'.
+ */
+ update_curr(&rq->cfs);
+
for_each_sched_entity(se) {
cfs_rq = group_cfs_rq_of_parent(se);
entity_tick(cfs_rq, se, queued);
--
2.20.1