Date:   Fri, 24 Mar 2017 14:23:51 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     luca abeni <luca.abeni@...tannapisa.it>
Cc:     linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@....com>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Tommaso Cucinotta <tommaso.cucinotta@...up.it>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Joel Fernandes <joelaf@...gle.com>,
        Mathieu Poirier <mathieu.poirier@...aro.org>
Subject: Re: [RFC v5 2/9] sched/deadline: improve the tracking of active
 utilization

On Fri, Mar 24, 2017 at 04:52:55AM +0100, luca abeni wrote:
> @@ -2518,6 +2520,7 @@ static int dl_overflow(struct task_struct *p, int policy,
>  		   !__dl_overflow(dl_b, cpus, p->dl.dl_bw, new_bw)) {
>  		__dl_clear(dl_b, p->dl.dl_bw);
>  		__dl_add(dl_b, new_bw);
> +		dl_change_utilization(p, new_bw);
>  		err = 0;

Every time I see that __dl_clear()/__dl_add() pairing I want to do this:


---
 kernel/sched/core.c     | 4 ++--
 kernel/sched/deadline.c | 2 +-
 kernel/sched/sched.h    | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3b31fc05a0f1..b845ee4b3e55 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2512,11 +2512,11 @@ static int dl_overflow(struct task_struct *p, int policy,
 		err = 0;
 	} else if (dl_policy(policy) && task_has_dl_policy(p) &&
 		   !__dl_overflow(dl_b, cpus, p->dl.dl_bw, new_bw)) {
-		__dl_clear(dl_b, p->dl.dl_bw);
+		__dl_sub(dl_b, p->dl.dl_bw);
 		__dl_add(dl_b, new_bw);
 		err = 0;
 	} else if (!dl_policy(policy) && task_has_dl_policy(p)) {
-		__dl_clear(dl_b, p->dl.dl_bw);
+		__dl_sub(dl_b, p->dl.dl_bw);
 		err = 0;
 	}
 	raw_spin_unlock(&dl_b->lock);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a2ce59015642..229660088138 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1695,7 +1695,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 		 * until we complete the update.
 		 */
 		raw_spin_lock(&src_dl_b->lock);
-		__dl_clear(src_dl_b, p->dl.dl_bw);
+		__dl_sub(src_dl_b, p->dl.dl_bw);
 		raw_spin_unlock(&src_dl_b->lock);
 	}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5cbf92214ad8..1a521324ecee 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -226,7 +226,7 @@ struct dl_bw {
 };
 
 static inline
-void __dl_clear(struct dl_bw *dl_b, u64 tsk_bw)
+void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw)
 {
 	dl_b->total_bw -= tsk_bw;
 }
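
Below is a minimal user-space sketch of the pattern the rename is about; the
pthread lock, the fixed-width types and the dl_change_bw() helper are
simplified stand-ins rather than the kernel's actual code. With __dl_sub()
named as the inverse of __dl_add(), the "drop the old bandwidth, add the new
one" call sites read symmetrically.

	/*
	 * Stand-alone illustration only: __dl_sub() is the exact inverse of
	 * __dl_add(), so a call site that swaps one bandwidth for another
	 * becomes visibly symmetric.  Not kernel code.
	 */
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>

	struct dl_bw {
		pthread_mutex_t lock;	/* stands in for raw_spinlock_t */
		uint64_t total_bw;	/* sum of admitted task bandwidths */
	};

	static inline void __dl_add(struct dl_bw *dl_b, uint64_t tsk_bw)
	{
		dl_b->total_bw += tsk_bw;
	}

	static inline void __dl_sub(struct dl_bw *dl_b, uint64_t tsk_bw)
	{
		dl_b->total_bw -= tsk_bw;
	}

	/* Swap a task's old bandwidth for a new one under the lock. */
	static void dl_change_bw(struct dl_bw *dl_b, uint64_t old_bw,
				 uint64_t new_bw)
	{
		pthread_mutex_lock(&dl_b->lock);
		__dl_sub(dl_b, old_bw);
		__dl_add(dl_b, new_bw);
		pthread_mutex_unlock(&dl_b->lock);
	}

	int main(void)
	{
		struct dl_bw dl_b = { PTHREAD_MUTEX_INITIALIZER, 0 };

		__dl_add(&dl_b, 100);		/* admit a task with bandwidth 100 */
		dl_change_bw(&dl_b, 100, 150);	/* task changes its parameters */
		printf("total_bw = %llu\n", (unsigned long long)dl_b.total_bw);
		return 0;
	}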
