Message-Id: <20210601083616.804229-1-dietmar.eggemann@arm.com>
Date: Tue, 1 Jun 2021 10:36:16 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH] sched/fair: Return early from update_tg_cfs_load() if delta == 0
In case the _avg delta is 0, there is no need to update se's _avg
(level n) or cfs_rq's _avg (level n-1); these values stay the same.

Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
cfs_rq's _sum should stay the same as well.

So bail out after se's _sum has been updated.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
---
This patch is against current tip/sched/urgent, commit f268c3737eca
("tick/nohz: Only check for RCU deferred wakeup on user/guest entry
when needed").
It needs commit 7c7ad626d9a0 ("sched/fair: Keep load_avg and load_sum
synced").
kernel/sched/fair.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7c8277e3d54..ce8e0e10e5d4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3548,9 +3548,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
 	load_sum = (s64)se_weight(se) * runnable_sum;
 	load_avg = div_s64(load_sum, divider);
 
+	se->avg.load_sum = runnable_sum;
+
 	delta = load_avg - se->avg.load_avg;
+	if (!delta)
+		return;
 
-	se->avg.load_sum = runnable_sum;
 	se->avg.load_avg = load_avg;
 
 	add_positive(&cfs_rq->avg.load_avg, delta);
--
2.25.1