Message-ID: <20160115070749.GA1914@X58A-UD3R>
Date:	Fri, 15 Jan 2016 16:07:49 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	Dietmar Eggemann <dietmar.eggemann@....com>, perterz@...radead.org
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Chris Metcalf <cmetcalf@...hip.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	Christoph Lameter <cl@...ux.com>,
	"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
	Mike Galbraith <efault@....de>, Rik van Riel <riel@...hat.com>
Subject: Re: [RFC PATCH 0/4] sched: Improve cpu load accounting with nohz

On Thu, Jan 14, 2016 at 10:23:46PM +0000, Dietmar Eggemann wrote:
> On 01/14/2016 09:27 PM, Peter Zijlstra wrote:
> >On Thu, Jan 14, 2016 at 09:19:00PM +0000, Dietmar Eggemann wrote:
> >>@@ -4346,7 +4346,10 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
> >>
> >>                 /* scale is effectively 1 << i now, and >> i divides by scale */
> >>
> >>-               old_load = this_rq->cpu_load[i] - tickless_load;
> >>+               if (this_rq->cpu_load[i] > tickless_load)
> >>+                       old_load = this_rq->cpu_load[i] - tickless_load;
> >>+               else
> >>+                       old_load = 0;
> >
> >Yeah, yuck. That'd go bad quick.
> >
> 
> ... because I set it to 0? But after the decay function we add
> tickless_load back to old_load. Maybe in the case where tickless_load >
> this_rq->cpu_load[i] we should decay this_rq->cpu_load[i] and not add
> tickless_load afterwards.
> 

I re-checked the equation I expanded and, fortunately, found it has no
problem. I think there are several ways to do this correctly, namely:

option 1. decay the absolute value with decay_load_missed() and restore
the sign afterwards.

option 2. make decay_load_missed() able to handle negative values.

option 3. see the patch below. I think this option is the best.
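
To illustrate why option 3 works, here is a toy sketch (not kernel code;
toy_decay() stands in for decay_load_missed() and the numbers are made up).
Because cpu_load[] and old_load are unsigned longs here, the current
"this_rq->cpu_load[i] - tickless_load" wraps to a huge value whenever
tickless_load is larger, and the decay then operates on garbage. Decaying
the two terms separately keeps every value passed to the decay non-negative:

#include <stdio.h>

/* Toy stand-in for decay_load_missed(): halve the load once. */
static unsigned long toy_decay(unsigned long load)
{
	return load / 2;
}

int main(void)
{
	unsigned long cpu_load = 10, tickless_load = 30, old_load;

	/* current code: 10 - 30 wraps before the decay, result is garbage */
	old_load = toy_decay(cpu_load - tickless_load) + tickless_load;
	printf("old scheme: %lu\n", old_load);

	/* option 3: decay each term separately, then add tickless_load back */
	old_load = toy_decay(cpu_load);		/* 5 */
	old_load -= toy_decay(tickless_load);	/* 5 - 15 wraps, but ... */
	old_load += tickless_load;		/* ... + 30 = 20, correct */
	printf("new scheme: %lu\n", old_load);

	return 0;
}

With non-wrapping inputs (say cpu_load = 30, tickless_load = 10) both
schemes give the same answer, 20, since the decay is (approximately)
multiplicative.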

-----8<-----

From ba3d3355fcce51c901376d268206f58a7d0e4214 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Fri, 15 Jan 2016 15:58:09 +0900
Subject: [PATCH] sched/fair: prevent using decay_load_missed() with a negative
 value

decay_load_missed() cannot handle negative values, so we need to prevent
passing it a negative value.

Signed-off-by: Byungchul Park <byungchul.park@....com>
---
 kernel/sched/fair.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8dde8b6..3f08d75 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4443,8 +4443,14 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
 
 		/* scale is effectively 1 << i now, and >> i divides by scale */
 
-		old_load = this_rq->cpu_load[i] - tickless_load;
+		old_load = this_rq->cpu_load[i];
 		old_load = decay_load_missed(old_load, pending_updates - 1, i);
+		old_load -= decay_load_missed(tickless_load, pending_updates - 1, i);
+		/*
+		 * old_load can never be a negative value because a decayed
+		 * tickless_load cannot be greater than the original
+		 * tickless_load.
+		 */
 		old_load += tickless_load;
 		new_load = this_load;
 		/*
-- 
1.9.1
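
(A note on the patch: right after the '-=' line, old_load can still
transiently wrap when the decayed tickless_load is larger than the decayed
cpu_load[i], since it is an unsigned long. That is harmless: the arithmetic
is modular, and the final sum after adding tickless_load back is
non-negative, as the comment says. The point of the rearrangement is only
that decay_load_missed() itself is never handed a wrapped value.)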

