Date:	Wed, 13 Jan 2016 17:01:28 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Byungchul Park <byungchul.park@....com>,
	Chris Metcalf <cmetcalf@...hip.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	Christoph Lameter <cl@...ux.com>,
	"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
	Mike Galbraith <efault@....de>, Rik van Riel <riel@...hat.com>
Subject: [PATCH 1/4] sched: Don't account tickless CPU load on tick

The CPU load update on tick doesn't care about dynticks and as such is
buggy when it occurs on nohz ticks (including idle ticks), as it resets
the jiffies snapshot that was recorded on nohz entry. As a result, we
eventually ignore the potentially long tickless load that happened
before the tick.

We can fix this in two ways:

1) Handle the tickless load, but then we must make sure that a freshly
   woken task's load doesn't get accounted as the whole previous tickless
   load.

2) Ignore nohz ticks and delay the accounting to the nohz exit point.

For simplicity, this patch proposes to fix the issue with the second
solution.

Cc: Byungchul Park <byungchul.park@....com>
Cc: Mike Galbraith <efault@....de>
Cc: Chris Metcalf <cmetcalf@...hip.com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Luiz Capitulino <lcapitulino@...hat.com>
Cc: Paul E . McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
---
 kernel/sched/fair.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1093873..b849ea8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4518,10 +4518,20 @@ void update_cpu_load_nohz(int active)
  */
 void update_cpu_load_active(struct rq *this_rq)
 {
-	unsigned long load = weighted_cpuload(cpu_of(this_rq));
+	unsigned long load;
+
+	/*
+	 * If the tick is stopped, we can't reliably update the
+	 * load without risking to spuriously account the weight
+	 * of a freshly woken task as the whole weight of a long
+	 * tickless period.
+	 */
+	if (tick_nohz_tick_stopped())
+		return;
 	/*
 	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
 	 */
+	load = weighted_cpuload(cpu_of(this_rq));
 	this_rq->last_load_update_tick = jiffies;
 	__update_cpu_load(this_rq, load, 1, 1);
 }
-- 
2.6.4
