Message-ID: <157658639209.30329.5550277285445533985.tip-bot2@tip-bot2>
Date: Tue, 17 Dec 2019 12:39:52 -0000
From: "tip-bot2 for Peng Wang" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Peng Wang <rocking@...ux.alibaba.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: sched/core] sched/fair: Skip calculating @contrib without load

The following commit has been merged into the sched/core branch of tip:
Commit-ID: d040e0734fb3dedfe24c3d94f5a32b4812eca610
Gitweb: https://git.kernel.org/tip/d040e0734fb3dedfe24c3d94f5a32b4812eca610
Author: Peng Wang <rocking@...ux.alibaba.com>
AuthorDate: Fri, 13 Dec 2019 11:45:40 +08:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 17 Dec 2019 13:32:51 +01:00
sched/fair: Skip calculating @contrib without load

Because of the:

    if (!load)
        runnable = running = 0;

clause in ___update_load_sum(), all the actual users of @contrib in
accumulate_sum():

    if (load)
        sa->load_sum += load * contrib;
    if (runnable)
        sa->runnable_load_sum += runnable * contrib;
    if (running)
        sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;

don't happen, and therefore we don't care what @contrib actually is and
calculating it is pointless.
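
The dependency is easier to see in a pared-down user-space sketch (the
struct, the helper names and the constant 1024 below are made up for
illustration; only the guard pattern mirrors ___update_load_sum() and
accumulate_sum()):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical, pared-down types -- not the kernel definitions. */
    struct avg {
        uint64_t load_sum;
        uint64_t runnable_load_sum;
        uint32_t util_sum;
    };

    /* Mirrors the shape of accumulate_sum(): every read of @contrib is
       guarded by load, runnable or running. */
    static void accumulate(struct avg *sa, unsigned long load,
                           unsigned long runnable, int running,
                           uint32_t contrib)
    {
        if (load)
            sa->load_sum += load * contrib;
        if (runnable)
            sa->runnable_load_sum += runnable * contrib;
        if (running)
            sa->util_sum += contrib << 10;  /* SCHED_CAPACITY_SHIFT */
    }

    /* Mirrors the clause in ___update_load_sum(): with load == 0,
       runnable and running are forced to 0 before the sums are
       accumulated, so @contrib is never read and computing it is
       wasted work. */
    static void update_load_sum(struct avg *sa, unsigned long load,
                                unsigned long runnable, int running)
    {
        uint32_t contrib = 0;

        if (!load)
            runnable = running = 0;

        if (load)
            contrib = 1024;  /* stand-in for __accumulate_pelt_segments() */

        accumulate(sa, load, runnable, running, contrib);
    }

    int main(void)
    {
        struct avg sa = { 0, 0, 0 };

        update_load_sum(&sa, 0, 1, 1);  /* no load: nothing accumulated */
        update_load_sum(&sa, 2, 1, 1);  /* load: contrib computed and used */
        printf("%llu %llu %u\n", (unsigned long long)sa.load_sum,
               (unsigned long long)sa.runnable_load_sum, sa.util_sum);
        return 0;
    }
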
If we count the number of times @load equals zero and the number of
times it does not, as below:

    if (load) {
        load_is_not_zero_count++;
        contrib = __accumulate_pelt_segments(periods,
                1024 - sa->period_contrib, delta);
    } else
        load_is_zero_count++;

we see that load_is_zero_count is much bigger than
load_is_not_zero_count, and the gap is gradually widening:

    load_is_zero_count:          6016044 times
    load_is_not_zero_count:       244316 times
    19:50:43 up 1 min,  1 user,  load average: 0.09, 0.06, 0.02

    load_is_zero_count:          7956168 times
    load_is_not_zero_count:       261472 times
    19:51:42 up 2 min,  1 user,  load average: 0.03, 0.05, 0.01

    load_is_zero_count:         10199896 times
    load_is_not_zero_count:       278364 times
    19:52:51 up 3 min,  1 user,  load average: 0.06, 0.05, 0.01

    load_is_zero_count:         14333700 times
    load_is_not_zero_count:       318424 times
    19:54:53 up 5 min,  1 user,  load average: 0.01, 0.03, 0.00

Perhaps we can gain some performance advantage by skipping this
unnecessary calculation.
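
For a sense of what is being saved, here is a rough floating-point,
user-space illustration of the segment sum that
__accumulate_pelt_segments() evaluates (the kernel does this in fixed
point with a precomputed decay table and a closed form for the middle
term; the names below are only for this sketch, build with -lm):

    #include <math.h>
    #include <stdio.h>

    /* PELT decay factor y, chosen so that y^32 == 0.5. */
    static const double y = 0.97857206208770013;

    /* Rough equivalent of the series __accumulate_pelt_segments()
       computes: the old partial period d1 decayed over all elapsed
       periods, a full 1024us contribution for every whole period in
       between (each decayed), and the new partial period d3.  This is
       the arithmetic skipped when @load == 0. */
    static double pelt_segments(unsigned int periods, double d1, double d3)
    {
        double c1 = d1 * pow(y, periods);
        double c2 = 0.0;
        unsigned int n;

        for (n = 1; n < periods; n++)
            c2 += 1024.0 * pow(y, n);

        return c1 + c2 + d3;
    }

    int main(void)
    {
        /* e.g. 4 whole periods crossed, 300us left of the old period,
           200us into the new one */
        printf("contrib ~= %.1f\n", pelt_segments(4, 300.0, 200.0));
        return 0;
    }
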
Signed-off-by: Peng Wang <rocking@...ux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Link: https://lkml.kernel.org/r/1576208740-35609-1-git-send-email-rocking@linux.alibaba.com
---
kernel/sched/pelt.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index a96db50..bd006b7 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -129,8 +129,20 @@ accumulate_sum(u64 delta, struct sched_avg *sa,
                  * Step 2
                  */
                 delta %= 1024;
-                contrib = __accumulate_pelt_segments(periods,
-                                1024 - sa->period_contrib, delta);
+                if (load) {
+                        /*
+                         * This relies on the:
+                         *
+                         *   if (!load)
+                         *     runnable = running = 0;
+                         *
+                         * clause from ___update_load_sum(); this results in
+                         * the below usage of @contrib to disappear entirely,
+                         * so no point in calculating it.
+                         */
+                        contrib = __accumulate_pelt_segments(periods,
+                                        1024 - sa->period_contrib, delta);
+                }
         }
         sa->period_contrib = delta;

@@ -205,7 +217,9 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
          * This means that weight will be 0 but not running for a sched_entity
          * but also for a cfs_rq if the latter becomes idle. As an example,
          * this happens during idle_balance() which calls
-         * update_blocked_averages()
+         * update_blocked_averages().
+         *
+         * Also see the comment in accumulate_sum().
          */
         if (!load)
                 runnable = running = 0;