Date:   Thu,  9 Nov 2017 16:41:14 +0000
From:   Patrick Bellasi <>
Cc:     Ingo Molnar <>,
        Peter Zijlstra <>,
        "Rafael J . Wysocki" <>,
        Viresh Kumar <>,
        Vincent Guittot <>,
        Paul Turner <>,
        Dietmar Eggemann <>,
        Morten Rasmussen <>,
        Juri Lelli <>,
        Todd Kjos <>,
        Joel Fernandes <>
Subject: [PATCH 1/4] sched/fair: always use unsigned long for utilization

Utilization and capacity are tracked as unsigned long, however some of
the functions using them return an int, which is ultimately assigned
back to unsigned long variables.

Since there is no scope for using a different, signed type, consolidate
the signatures of the functions returning utilization to always use the
native type.

As well as improving code consistency, this is also expected to benefit
code paths where utilization values should be clamped, by avoiding
further type conversions or ugly type casts.

Signed-off-by: Patrick Bellasi <>
Reviewed-by: Chris Redpath <>
Reviewed-by: Brendan Jackman <>
Reviewed-by: Dietmar Eggemann <>
Cc: Ingo Molnar <>
Cc: Peter Zijlstra <>
Cc: Vincent Guittot <>
Cc: Morten Rasmussen <>
Cc: Dietmar Eggemann <>
---
 kernel/sched/fair.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5c09ddf8c832..83bc5d69fe3a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5438,8 +5438,8 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	return affine;
 }
 
-static inline int task_util(struct task_struct *p);
-static int cpu_util_wake(int cpu, struct task_struct *p);
+static inline unsigned long task_util(struct task_struct *p);
+static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
 
 static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 {
@@ -5870,7 +5870,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
  * capacity_orig) as it useful for predicting the capacity required after task
  * migrations (scheduler-driven DVFS).
  */
-static int cpu_util(int cpu)
+static unsigned long cpu_util(int cpu)
 {
 	unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
 	unsigned long capacity = capacity_orig_of(cpu);
@@ -5878,7 +5878,7 @@ static int cpu_util(int cpu)
 	return (util >= capacity) ? capacity : util;
 }
 
-static inline int task_util(struct task_struct *p)
+static inline unsigned long task_util(struct task_struct *p)
 {
 	return p->se.avg.util_avg;
 }
@@ -5887,7 +5887,7 @@ static inline int task_util(struct task_struct *p)
 /*
  * cpu_util_wake: Compute cpu utilization with any contributions from
  * the waking task p removed.
  */
-static int cpu_util_wake(int cpu, struct task_struct *p)
+static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 {
 	unsigned long util, capacity;
