Message-ID: <lsq.1471110171.445125783@decadent.org.uk>
Date: Sat, 13 Aug 2016 18:42:51 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Damien Wyart" <damien.wyart@...e.fr>,
"Ingo Molnar" <mingo@...nel.org>,
"Vik Heyndrickx" <vik.heyndrickx@...ibox.net>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Mike Galbraith" <efault@....de>,
"Doug Smythies" <dsmythies@...us.net>
Subject: [PATCH 3.16 073/305] sched/loadavg: Fix loadavg artifacts on
fully idle and on fully loaded systems
3.16.37-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Vik Heyndrickx <vik.heyndrickx@...ibox.net>
commit 20878232c52329f92423d27a60e48b6a6389e0dd upstream.
Systems show a minimum load average of 0.00, 0.01, 0.05 even when they
have no load at all.
Uptime and /proc/loadavg on all systems with kernels released during the
last five years, up until kernel version 4.6-rc5, show a 5- and 15-minute
minimum loadavg of 0.01 and 0.05 respectively. This should be 0.00 on
idle systems, but the way the kernel calculates this value prevents it
from ever getting lower than those values.
Likewise, though less obviously noticeable, a fully loaded system with no
processes waiting shows a maximum 1/5/15 loadavg of 1.00, 0.99 and 0.95
(multiplied by the number of cores).
Once the (old) load term becomes 93 or higher, it mathematically can
never drop below 93, even when the active load remains 0 forever.
This results in the strange 0.00, 0.01, 0.05 uptime values on idle
systems. Note: 93/2048 = 0.0454..., which rounds up to 0.05.
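For illustration (not part of the original commit message), a minimal
standalone userspace sketch of the old calc_load() with the kernel's
15-minute decay constant; it shows the idle value sticking at 93
(displayed as 0.05) and the fully loaded value sticking at 1955
(displayed as 0.95):

/* Standalone illustration, not part of this patch: simulate the old
 * calc_load() with the 15-minute decay constant.  Constants match
 * include/linux/sched.h.
 */
#include <stdio.h>

#define FSHIFT		11			/* bits of precision */
#define FIXED_1		(1 << FSHIFT)		/* 1.0 in fixed-point */
#define EXP_15		2037			/* 1/exp(5sec/15min) */

static unsigned long old_calc_load(unsigned long load,
				   unsigned long exp,
				   unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	load += 1UL << (FSHIFT - 1);	/* the +0.5 rounding at fault */
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long idle = FIXED_1, full = 0;
	int i;

	for (i = 0; i < 5000; i++) {
		idle = old_calc_load(idle, EXP_15, 0);	     /* no load */
		full = old_calc_load(full, EXP_15, FIXED_1); /* 1 busy CPU */
	}
	/* Prints idle=93 full=1955: 93/2048 rounds up to 0.05 and
	 * 1955/2048 shows as 0.95, instead of 0.00 and 1.00. */
	printf("idle=%lu full=%lu\n", idle, full);
	return 0;
}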
It is not correct to simply add a 0.5 rounding term (= 1024/2048) here,
because the result of this function is fed back into the next iteration:
the +0.5 rounding value then gets multiplied by (2048 - 2037) and rounded
again on every pass, creating a virtual "ghost" load next to the old and
active load terms.
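To make the fixed point explicit for the 15-minute case (EXP_15 = 2037,
FIXED_1 = 2048): with active == 0 the old iteration is

	load' = (load * 2037 + 1024) >> 11

which stops decreasing as soon as load * (2048 - 2037) <= 1024, i.e.
load <= 1024/11 = 93.09. Indeed (93 * 2037 + 1024) / 2048 = 93.0004...,
which truncates back to 93, so 93 is a stable point.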
By changing the way the internally kept value is rounded, that internal
value can now reach 0.00 when idle and 1.00 under full load: upon
increasing load the internally kept load value is rounded up, and upon
decreasing load it is rounded down.
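To illustrate with the same 15-minute constants: on the way down,
active (0) < load, so no rounding term is added and 93 becomes
(93 * 2037) >> 11 = 92, continuing down to 0; on the way up under full
load, active (2048) >= load, so the rounding term FIXED_1-1 is added and
2047 becomes (2047 * 2037 + 2048 * 11 + 2047) >> 11 = 2048, i.e. exactly
1.00.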
The modified code was tested on nohz=off and nohz kernels. It was tested
on the vanilla 4.6-rc5 kernel and on the CentOS 7.1 kernel 3.10.0-327.
It was tested on single-, dual- and octal-core systems, on virtual hosts
and on bare hardware. No unwanted effects were observed, and the
problems that the patch intended to fix were indeed gone.
Tested-by: Damien Wyart <damien.wyart@...e.fr>
Signed-off-by: Vik Heyndrickx <vik.heyndrickx@...ibox.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Doug Smythies <dsmythies@...us.net>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Fixes: 0f004f5a696a ("sched: Cure more NO_HZ load average woes")
Link: http://lkml.kernel.org/r/e8d32bff-d544-7748-72b5-3c86cc71f09f@veribox.net
Signed-off-by: Ingo Molnar <mingo@...nel.org>
[bwh: Backported to 3.16: adjust filename]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
kernel/sched/proc.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
--- a/kernel/sched/proc.c
+++ b/kernel/sched/proc.c
@@ -104,10 +104,13 @@ long calc_load_fold_active(struct rq *th
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
- load *= exp;
- load += active * (FIXED_1 - exp);
- load += 1UL << (FSHIFT - 1);
- return load >> FSHIFT;
+ unsigned long newload;
+
+ newload = load * exp + active * (FIXED_1 - exp);
+ if (active >= load)
+ newload += FIXED_1-1;
+
+ return newload / FIXED_1;
}
#ifdef CONFIG_NO_HZ_COMMON