Date:   Wed, 10 Jan 2018 04:20:47 -0800
From:   tip-bot for Juri Lelli <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     rafael.j.wysocki@...el.com, torvalds@...ux-foundation.org,
        tglx@...utronix.de, juri.lelli@....com, mingo@...nel.org,
        hpa@...or.com, peterz@...radead.org, claudio@...dence.eu.com,
        linux-kernel@...r.kernel.org, luca.abeni@...tannapisa.it,
        viresh.kumar@...aro.org
Subject: [tip:sched/core] sched/deadline: Make bandwidth enforcement
 scale-invariant

Commit-ID:  07881166a892fa4908ac4924660a7793f75d6544
Gitweb:     https://git.kernel.org/tip/07881166a892fa4908ac4924660a7793f75d6544
Author:     Juri Lelli <juri.lelli@....com>
AuthorDate: Mon, 4 Dec 2017 11:23:25 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 10 Jan 2018 12:53:35 +0100

sched/deadline: Make bandwidth enforcement scale-invariant

Apply the frequency and CPU scale-invariance correction factors to bandwidth
enforcement (similar to what we already do for fair utilization tracking).

Each delta_exec gets scaled according to the current frequency and the
maximum CPU capacity; this means that the reservation runtime parameter
(which needs to be specified by profiling the task's execution at maximum
frequency on the biggest-capacity core) is effectively scaled as well.
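
To make the arithmetic concrete, a minimal user-space sketch of the same
two-step scaling follows (the macro mirrors the kernel's cap_scale() and
SCHED_CAPACITY_SHIFT definitions; the 1 ms delta and the 50%/75%
frequency/capacity values are made-up inputs, not taken from the patch):

#include <stdio.h>
#include <stdint.h>

/* Kernel fixed-point convention: frequency and capacity are expressed
 * relative to a 1024 == 100% scale (SCHED_CAPACITY_SCALE). */
#define SCHED_CAPACITY_SHIFT	10
#define cap_scale(v, s)		((v) * (s) >> SCHED_CAPACITY_SHIFT)

int main(void)
{
	uint64_t delta_exec = 1000000;	/* 1 ms of observed runtime, in ns */
	unsigned long scale_freq = 512;	/* running at 50% of max frequency */
	unsigned long scale_cpu = 768;	/* core at 75% of the biggest capacity */

	/* Scale first by frequency, then by CPU capacity, as the
	 * non-reclaiming branch of update_curr_dl() does. */
	uint64_t scaled = cap_scale(delta_exec, scale_freq);
	scaled = cap_scale(scaled, scale_cpu);

	printf("scaled_delta_exec = %llu ns\n", (unsigned long long)scaled);
	return 0;
}

This prints 375000: a task that ran for 1 ms at half speed on a
three-quarter-capacity core is charged only 0.375 ms against a runtime
budget that was profiled at maximum frequency on the biggest core.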

Signed-off-by: Juri Lelli <juri.lelli@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Claudio Scordino <claudio@...dence.eu.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Luca Abeni <luca.abeni@...tannapisa.it>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Viresh Kumar <viresh.kumar@...aro.org>
Cc: alessio.balsini@....com
Cc: bristot@...hat.com
Cc: dietmar.eggemann@....com
Cc: joelaf@...gle.com
Cc: juri.lelli@...hat.com
Cc: mathieu.poirier@...aro.org
Cc: morten.rasmussen@....com
Cc: patrick.bellasi@....com
Cc: rjw@...ysocki.net
Cc: rostedt@...dmis.org
Cc: tkjos@...roid.com
Cc: tommaso.cucinotta@...tannapisa.it
Cc: vincent.guittot@...aro.org
Link: http://lkml.kernel.org/r/20171204102325.5110-9-juri.lelli@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/deadline.c | 26 ++++++++++++++++++++++----
 kernel/sched/fair.c     |  2 --
 kernel/sched/sched.h    |  2 ++
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 54a0dc1..9bb0e0c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1151,7 +1151,8 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec;
+	u64 delta_exec, scaled_delta_exec;
+	int cpu = cpu_of(rq);
 
 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -1185,9 +1186,26 @@ static void update_curr_dl(struct rq *rq)
 	if (dl_entity_is_special(dl_se))
 		return;
 
-	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
-		delta_exec = grub_reclaim(delta_exec, rq, &curr->dl);
-	dl_se->runtime -= delta_exec;
+	/*
+	 * For tasks that participate in GRUB, we implement GRUB-PA: the
+	 * spare reclaimed bandwidth is used to clock down frequency.
+	 *
+	 * For the others, we still need to scale reservation parameters
+	 * according to current frequency and CPU maximum capacity.
+	 */
+	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
+		scaled_delta_exec = grub_reclaim(delta_exec,
+						 rq,
+						 &curr->dl);
+	} else {
+		unsigned long scale_freq = arch_scale_freq_capacity(cpu);
+		unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
+		scaled_delta_exec = cap_scale(delta_exec, scale_freq);
+		scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
+	}
+
+	dl_se->runtime -= scaled_delta_exec;
 
 throttle:
 	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1485975..1070803 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3089,8 +3089,6 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
 	return c1 + c2 + c3;
 }
 
-#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-
 /*
  * Accumulate the three separate parts of the sum; d1 the remainder
  * of the last (incomplete) period, d2 the span of full periods and d3
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e122c89..2e95505 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -156,6 +156,8 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 /*
  * !! For sched_setattr_nocheck() (kernel) only !!
  *

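One property of the cap_scale() fixed-point form is worth noting: the
right shift truncates, so chaining the frequency and capacity scalings
can come out slightly below the exact product, marginally under-charging
the reservation. A quick user-space check (the input values are
arbitrary assumptions, chosen not to divide evenly by 1024):

#include <stdio.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT	10
#define cap_scale(v, s)		((v) * (s) >> SCHED_CAPACITY_SHIFT)

int main(void)
{
	uint64_t delta_exec = 333333;	/* ns */
	unsigned long scale_freq = 717;	/* ~70% frequency (assumed) */
	unsigned long scale_cpu = 446;	/* ~43.5% capacity (assumed) */

	/* Chained fixed-point scaling, as update_curr_dl() applies it. */
	uint64_t chained = cap_scale(cap_scale(delta_exec, scale_freq),
				     scale_cpu);

	/* Exact real-valued product, for comparison. */
	double exact = (double)delta_exec * scale_freq / 1024.0
					  * scale_cpu / 1024.0;

	printf("fixed-point: %llu ns, exact: %.2f ns\n",
	       (unsigned long long)chained, exact);
	return 0;
}

Each truncation drops less than one unit, so the error per update stays
below 2 ns, which is negligible at reservation granularity.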