Date:	Sat, 23 May 2015 20:01:25 +0200
From:	Nicholas Mc Guire <hofrat@...dl.org>
To:	Ingo Molnar <mingo@...hat.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	Nicholas Mc Guire <hofrat@...dl.org>
Subject: [PATCH RFC] sched: remove implicit unsigned long - int - unsigned long conversion

Type-checking Coccinelle spatches are being used to locate type mismatches
between function signatures and return values; in this case this produced:
kernel/sched/fair.c:4987 WARNING: return of wrong type 
         int != unsigned long

get_cpu_usage() has a single user, update_sg_lb_stats(), in which it is
called as:
  sgs->group_usage += get_cpu_usage(i);
As group_usage (from struct sg_lb_stats) is unsigned long, this is
effectively an unsigned long -> int -> unsigned long automatic type
conversion. The conversion is harmless here, as the return of
get_cpu_usage() can never exceed SCHED_LOAD_SCALE, which is < INT_MAX
on both 64-bit and 32-bit systems.
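
For illustration, a minimal userspace sketch (not kernel code; the
SCHED_LOAD_SCALE value and the as_int() helper are stand-ins modelling
the old int return type) of why the round trip is value-preserving:

  #include <assert.h>
  #include <stdio.h>

  #define SCHED_LOAD_SCALE (1UL << 10)	/* 2**10, as noted below */

  /* models the implicit narrowing in the old int get_cpu_usage() */
  static int as_int(unsigned long usage)
  {
  	return usage;			/* unsigned long -> int */
  }

  int main(void)
  {
  	unsigned long usage, group_usage;

  	for (usage = 0; usage <= SCHED_LOAD_SCALE; usage++) {
  		group_usage = as_int(usage);	/* int -> unsigned long */
  		assert(group_usage == usage);	/* nothing is lost */
  	}
  	printf("round trip lossless up to %lu\n", SCHED_LOAD_SCALE);
  	return 0;
  }

Every value in [0..SCHED_LOAD_SCALE] fits in an int, so the narrowing
and widening cancel out; the patch only removes the pointless detour.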

Proposal: make get_cpu_usage() return unsigned long to make this type-clean.

Patch was compile-tested with x86_64_defconfig.

Patch is against 4.1-rc4 (localversion-next is -next-20150522).

Signed-off-by: Nicholas Mc Guire <hofrat@...dl.org>
---

A question that popped up during code review: "usage" in get_cpu_usage()
is in the range [0..SCHED_LOAD_SCALE], which is 2**10 (it could be up to
2**20, but that is currently disabled, see sched.h:55). From reading the
code it seems that group_usage would eventually overflow, as it is only
ever incremented but never decremented or reset:

load_balance()
 -> find_busiest_group()
   -> update_sd_lb_stats()
     -> update_sg_lb_stats()
        ...
        sgs->group_usage += get_cpu_usage(i); 

where get_cpu_usage() returns (unsigned long) utilization_load_avg or
cpu_capacity_orig, both of which seem to always be positive, so
group_usage would eventually overflow. On 64-bit systems this overflow
would in practice probably never happen, but what about 32-bit systems?

What am I missing here?

 kernel/sched/fair.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1dbeea9..7c169a8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4978,7 +4978,7 @@ done:
  * Without capping the usage, a group could be seen as overloaded (CPU0 usage
  * at 121% + CPU1 usage at 80%) whereas CPU1 has 20% of available capacity
  */
-static int get_cpu_usage(int cpu)
+static unsigned long get_cpu_usage(int cpu)
 {
 	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
 	unsigned long capacity = capacity_orig_of(cpu);
-- 
1.7.10.4

