Message-ID: <1DD7BFEDD3147247B1355BEFEFE46652379C3DF10C@HQMAIL04.nvidia.com>
Date:	Tue, 8 May 2012 14:39:33 -0700
From:	Diwakar Tundlam <dtundlam@...dia.com>
To:	'Peter Zijlstra' <a.p.zijlstra@...llo.nl>
CC:	'Ingo Molnar' <mingo@...nel.org>,
	'David Rientjes' <rientjes@...gle.com>,
	"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
	Peter De Schrijver <pdeschrijver@...dia.com>
Subject: [PATCH] sched: Make nr_uninterruptible count a signed value

Declare nr_uninterruptible as a signed long to avoid the garbage
values seen in /proc/sched_debug when a task is moved to the run
queue of a newly onlined core. A per-CPU count can legitimately go
negative (a task may sleep on one CPU and be woken on another), so
only the total sum over all CPUs is meaningful.

Signed-off-by: Diwakar Tundlam <dtundlam@...dia.com>
---
 kernel/sched/core.c  |    7 ++++---
 kernel/sched/sched.h |    2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8d5eef6..7a64b5b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2114,7 +2114,8 @@ unsigned long nr_running(void)
 
 unsigned long nr_uninterruptible(void)
 {
-	unsigned long i, sum = 0;
+	unsigned long i;
+	long sum = 0;
 
 	for_each_possible_cpu(i)
 		sum += cpu_rq(i)->nr_uninterruptible;
@@ -2123,7 +2124,7 @@ unsigned long nr_uninterruptible(void)
 	 * Since we read the counters lockless, it might be slightly
 	 * inaccurate. Do not allow it to go below zero though:
 	 */
-	if (unlikely((long)sum < 0))
+	if (unlikely(sum < 0))
 		sum = 0;
 
 	return sum;
@@ -2174,7 +2175,7 @@ static long calc_load_fold_active(struct rq *this_rq)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running;
-	nr_active += (long) this_rq->nr_uninterruptible;
+	nr_active += this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fb3acba..2668b07 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -385,7 +385,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned long nr_uninterruptible;
+	long nr_uninterruptible;
 
 	struct task_struct *curr, *idle, *stop;
 	unsigned long next_balance;
-- 
1.7.4.1