Date:	Wed, 12 Nov 2008 17:08:52 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...e.hu>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Vaidyanathan S <svaidyan@...ibm.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Bharata B Rao <bharata@...ux.vnet.ibm.com>
Cc:	efault@....de
Subject: Re: [PATCH][mmotm] Sched fix stale value in average load per task

* Balbir Singh <balbir@...ux.vnet.ibm.com> [2008-11-12 16:19:00]:

> +	else
> +		rq->avg_load_per_task = 0;
>  
>  	return rq->avg_load_per_task;

On second thoughts, does this look better? (Based on Mike Galbraith's
suggestion.) Instead of keeping the cached field and zeroing it as in the
version quoted above, this one drops rq->avg_load_per_task entirely and
computes the value on each call.


cpu_avg_load_per_task() returns a stale value when nr_running is 0: the
last value calculated while nr_running was still non-zero. This patch
removes the cached rq->avg_load_per_task field and makes
cpu_avg_load_per_task() return zero whenever nr_running is 0.

Signed-off-by: Balbir Singh <balbir@...ux.vnet.ibm.com>
---

 kernel/sched.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff -puN kernel/sched.c~sched-fix-stale-load-avg kernel/sched.c
--- linux-2.6.28-rc4/kernel/sched.c~sched-fix-stale-load-avg	2008-11-12 15:55:38.000000000 +0530
+++ linux-2.6.28-rc4-balbir/kernel/sched.c	2008-11-12 16:40:26.000000000 +0530
@@ -571,8 +571,6 @@ struct rq {
 	int cpu;
 	int online;
 
-	unsigned long avg_load_per_task;
-
 	struct task_struct *migration_thread;
 	struct list_head migration_queue;
 #endif
@@ -1428,10 +1426,10 @@ static unsigned long cpu_avg_load_per_ta
 {
 	struct rq *rq = cpu_rq(cpu);
 
-	if (rq->nr_running)
-		rq->avg_load_per_task = rq->load.weight / rq->nr_running;
-
-	return rq->avg_load_per_task;
+	if (unlikely(!rq->nr_running))
+		return 0;
+
+	return rq->load.weight / rq->nr_running;
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
_
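
To make the failure mode concrete outside the scheduler, here is a minimal
user-space sketch. The names (fake_rq, avg_stale, avg_fixed) and the load
numbers are made up for illustration; this is not kernel code.

#include <stdio.h>

struct fake_rq {
	unsigned long load_weight;
	unsigned long nr_running;
	unsigned long avg_load_per_task;	/* cached value, source of the bug */
};

/* Old behaviour: the cache is only refreshed while nr_running != 0. */
static unsigned long avg_stale(struct fake_rq *rq)
{
	if (rq->nr_running)
		rq->avg_load_per_task = rq->load_weight / rq->nr_running;
	return rq->avg_load_per_task;
}

/* Fixed behaviour: no cache, an idle queue reports 0. */
static unsigned long avg_fixed(const struct fake_rq *rq)
{
	if (!rq->nr_running)
		return 0;
	return rq->load_weight / rq->nr_running;
}

int main(void)
{
	struct fake_rq rq = { .load_weight = 2048, .nr_running = 2 };

	printf("busy: stale=%lu fixed=%lu\n", avg_stale(&rq), avg_fixed(&rq));
	rq.nr_running = 0;	/* the queue goes idle */
	printf("idle: stale=%lu fixed=%lu\n", avg_stale(&rq), avg_fixed(&rq));
	return 0;
}

Once the queue goes idle, avg_stale() keeps reporting 1024 while
avg_fixed() reports 0, which is exactly the stale read the patch removes.
Dropping the cached field also means one less store on each call.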

 

-- 
	Balbir
