Message-Id: <20161027174108.31139-5-patrick.bellasi@arm.com>
Date:   Thu, 27 Oct 2016 18:41:04 +0100
From:   Patrick Bellasi <patrick.bellasi@....com>
To:     linux-kernel@...r.kernel.org
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Steve Muckle <steve.muckle@...aro.org>,
        Leo Yan <leo.yan@...aro.org>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Todd Kjos <tkjos@...gle.com>,
        Srinath Sridharan <srinathsr@...gle.com>,
        Andres Oportus <andresoportus@...gle.com>,
        Juri Lelli <juri.lelli@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Chris Redpath <chris.redpath@....com>,
        Robin Randhawa <robin.randhawa@....com>,
        Patrick Bellasi <patrick.bellasi@....com>
Subject: [RFC v2 4/8] sched/fair: add boosted CPU usage

The CPU utilization signal (cpu_rq(cpu)->cfs.avg.util_avg) is used by
the scheduler as an estimate of the overall bandwidth currently
allocated to a CPU. When the schedutil CPUFreq governor is in use,
this signal drives the selection of the Operating Performance Point
(OPP) required to accommodate the workload allocated to that CPU.
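
As a reference for the reader, schedutil maps utilization to a
frequency roughly as follows. This is a simplified sketch of what
get_next_freq() does, not the exact code: the 1.25 headroom factor
reflects the current schedutil implementation, other details (e.g.
frequency-invariant scaling) are elided:

  static unsigned int sketch_next_freq(unsigned int max_freq,
                                       unsigned long util,
                                       unsigned long max)
  {
          /* next_freq ~= 1.25 * max_freq * util / max */
          return (max_freq + (max_freq >> 2)) * util / max;
  }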

A convenient, and minimally intrusive, way to boost the performance
of tasks running on a CPU is to boost the CPU utilization signal each
time it is used to select an OPP.

This patch introduces a new function:
  boosted_cpu_util(cpu)
which returns a boosted value for the usage of the specified CPU.

The margin added to the original usage is:
  1. computed based on the "boosting strategy" in use
  2. proportional to the system-wide boost value defined via the
     provided user-space interface (see the worked example below)
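
A sketch of the margin computation, assuming the SPC (Signal
Proportional Compensation) strategy introduced earlier in this
series: the margin is the boost percentage of the gap between the
current signal and the maximum capacity (SCHED_CAPACITY_SCALE):

  margin = (SCHED_CAPACITY_SCALE - signal) * boost / 100

For example, with util=256 and boost=50:

  margin  = (1024 - 256) * 50 / 100 = 384
  boosted = 256 + 384 = 640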

The boosted signal is used by schedutil (transparently) each time it
requires an estimate of the capacity required by the CFS tasks
currently RUNNABLE on a CPU.

It's worth noting that the RUNNABLE status defines _when_ a CPU needs
to be boosted, while _what_ we boost is the CPU utilization signal,
which also includes the blocked utilization. Thus, a boosted OPP is
selected only while at least one task is RUNNABLE, but the boosted
value still accounts for the residual utilization of recently blocked
tasks.

Currently SchedTune is available only on CONFIG_SMP systems, thus we
have a single point of integration with schedutil, provided by the
cfs_rq_util_change(cfs_rq) function, which ultimately calls into:
  kernel/sched/cpufreq_schedutil.c::sugov_get_util(util, max)
Each time a CFS utilization update is required, if SchedTune is
compiled in, the global boost value is used to inflate the
utilization reported for the CFS class.
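
For reference, on a CONFIG_SMP kernel with schedutil in use, the call
path on a CFS utilization update is roughly as follows (a sketch with
names abridged, not an exact trace):

  cfs_rq_util_change(cfs_rq)
    cpufreq_update_util(rq, 0)
      sugov_update_single() / sugov_update_shared()
        sugov_get_util(&util, &max)
          boosted_cpu_util(cpu)              <-- added by this patch
            util_avg + schedtune_cpu_margin(util, cpu)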

Such a simple mechanism allows, for example, using schedutil to mimic
the behavior of other governors, e.g. the performance governor (when
boost=100%, and only while there are RUNNABLE tasks on that CPU).
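
For example, with boost=100 on a CPU whose original capacity is
SCHED_CAPACITY_SCALE (1024), assuming the SPC margin sketched above:

  margin  = (1024 - util) * 100 / 100 = 1024 - util
  boosted = util + (1024 - util)      = 1024

i.e. the maximum OPP is requested whenever at least one CFS task is
RUNNABLE on that CPU, which is the performance governor's behavior
for those intervals.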

Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@....com>
---
 kernel/sched/cpufreq_schedutil.c |  4 ++--
 kernel/sched/fair.c              | 36 ++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h             |  2 ++
 3 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 69e0689..0382df7 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -148,12 +148,12 @@ static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util,
 
 static void sugov_get_util(unsigned long *util, unsigned long *max)
 {
-	struct rq *rq = this_rq();
 	unsigned long cfs_max;
 
 	cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
 
-	*util = min(rq->cfs.avg.util_avg, cfs_max);
+	*util = boosted_cpu_util(smp_processor_id());
+	*util = min(*util, cfs_max);
 	*max = cfs_max;
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fdacc29..26c3911 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5578,6 +5578,25 @@ schedtune_margin(unsigned long signal, unsigned int boost)
 	return margin;
 }
 
+static inline unsigned long
+schedtune_cpu_margin(unsigned long util, int cpu)
+{
+	unsigned int boost = get_sysctl_sched_cfs_boost();
+
+	if (boost == 0)
+		return 0UL;
+
+	return schedtune_margin(util, boost);
+}
+
+#else /* CONFIG_SCHED_TUNE */
+
+static inline unsigned long
+schedtune_cpu_margin(unsigned long util, int cpu)
+{
+	return 0;
+}
+
 #endif /* CONFIG_SCHED_TUNE */
 
 /*
@@ -5614,6 +5633,23 @@ static int cpu_util(int cpu)
 	return (util >= capacity) ? capacity : util;
 }
 
+unsigned long boosted_cpu_util(int cpu)
+{
+	unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
+	unsigned long capacity = capacity_orig_of(cpu);
+
+	/* Do not boost saturated utilizations */
+	if (util >= capacity)
+		return capacity;
+
+	/* Add margin to current CPU's capacity */
+	util += schedtune_cpu_margin(util, cpu);
+	if (util >= capacity)
+		return capacity;
+
+	return util;
+}
+
 static inline int task_util(struct task_struct *p)
 {
 	return p->se.avg.util_avg;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 055f935..fd85818 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1764,6 +1764,8 @@ static inline u64 irq_time_read(int cpu)
 }
 #endif /* CONFIG_IRQ_TIME_ACCOUNTING */
 
+unsigned long boosted_cpu_util(int cpu);
+
 #ifdef CONFIG_CPU_FREQ
 DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
 
-- 
2.10.1
