Date:   Tue, 21 Mar 2017 15:08:20 +0000
From:   Patrick Bellasi <patrick.bellasi@....com>
To:     "Rafael J. Wysocki" <rjw@...ysocki.net>
Cc:     Vincent Guittot <vincent.guittot@...aro.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Juri Lelli <juri.lelli@....com>,
        Joel Fernandes <joelaf@...gle.com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Ingo Molnar <mingo@...hat.com>
Subject: Re: [RFC][PATCH v2 2/2] cpufreq: schedutil: Avoid decreasing
 frequency of busy CPUs

On 21-Mar 15:46, Rafael J. Wysocki wrote:
> On Tuesday, March 21, 2017 02:38:42 PM Patrick Bellasi wrote:
> > On 21-Mar 15:26, Rafael J. Wysocki wrote:
> > > On Tuesday, March 21, 2017 02:37:08 PM Vincent Guittot wrote:
> > > > On 21 March 2017 at 14:22, Peter Zijlstra <peterz@...radead.org> wrote:
> > > > > On Tue, Mar 21, 2017 at 09:50:28AM +0100, Vincent Guittot wrote:
> > > > >> On 20 March 2017 at 22:46, Rafael J. Wysocki <rjw@...ysocki.net> wrote:
> > > > >
> > > > >> > To work around this issue use the observation that, from the
> > > > >> > schedutil governor's perspective, it does not make sense to decrease
> > > > >> > the frequency of a CPU that doesn't enter idle and avoid decreasing
> > > > >> > the frequency of busy CPUs.
> > > > >>
> > > > >> I don't fully agree with that statement.
> > > > >> If there are 2 runnable tasks on CPU A and scheduler migrates the
> > > > >> waiting task to another CPU B so CPU A is less loaded now, it makes
> > > > >> sense to reduce the OPP. That's even for that purpose that we have
> > > > >> decided to use scheduler metrics in cpufreq governor so we can adjust
> > > > >> OPP immediately when tasks migrate.
> > > > >> That being said, i probably know why you see such OPP switches in your
> > > > >> use case. When we migrate a task, we also migrate/remove its
> > > > >> utilization from CPU.
> > > > >> If the CPU is not overloaded, it means that runnable tasks have all
> > > > >> computation that they need and don't have any reason to use more when
> > > > >> a task migrates to another CPU. so decreasing the OPP makes sense
> > > > >> because the utilzation is decreasing
> > > > >> If the CPU is overloaded, it means that runnable tasks have to share
> > > > >> CPU time and probably don't have all computations that they would like
> > > > >> so when a task migrate, the remaining tasks on the CPU will increase
> > > > >> their utilization and fill space left by the task that has just
> > > > >> migrated. So the CPU's utilization will decrease when a task migrates
> > > > >> (and as a result the OPP) but then its utilization will increase with
> > > > >> remaining tasks running more time as well as the OPP
> > > > >>
> > > > >> So you need to make the difference between this 2 cases: Is a CPU
> > > > >> overloaded or not. You can't really rely on the utilization to detect
> > > > >> that but you could take advantage of the load which take into account
> > > > >> the waiting time of tasks
> > > > >
> > > > > I'm confused. What two cases? You only list the overloaded case, but he
> > > > 
> > > > The overloaded vs. not-overloaded use cases.
> > > > For the not-overloaded case, it makes sense to immediately update the
> > > > OPP to align it with the new utilization of the CPU, even if the CPU
> > > > was not idle in the past couple of ticks.
> > > 
> > > Yes, if the OPP (or P-state if you will) can be changed immediately.  If it can't,
> > > conditions may change by the time we actually update it, and in that case it'd
> > > be better to wait and see IMO.
> > > 
> > > In any case, the theory about migrating tasks made sense to me, so below is
> > > what I tested.  It works, and besides it has the nice property that I don't
> > > need to fetch the timekeeping data. :-)
> > > 
> > > I only wonder whether we want to do this, or only prevent the frequency
> > > from decreasing in the overloaded case?
> > > 
> > > ---
> > >  kernel/sched/cpufreq_schedutil.c |    8 +++++---
> > >  1 file changed, 5 insertions(+), 3 deletions(-)
> > > 
> > > Index: linux-pm/kernel/sched/cpufreq_schedutil.c
> > > ===================================================================
> > > --- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
> > > +++ linux-pm/kernel/sched/cpufreq_schedutil.c
> > > @@ -61,6 +61,7 @@ struct sugov_cpu {
> > >  	unsigned long util;
> > >  	unsigned long max;
> > >  	unsigned int flags;
> > > +	bool overload;
> > >  };
> > >  
> > >  static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
> > > @@ -207,7 +208,7 @@ static void sugov_update_single(struct u
> > >  	if (!sugov_should_update_freq(sg_policy, time))
> > >  		return;
> > >  
> > > -	if (flags & SCHED_CPUFREQ_RT_DL) {
> > > +	if ((flags & SCHED_CPUFREQ_RT_DL) || this_rq()->rd->overload) {
> > >  		next_f = policy->cpuinfo.max_freq;
> > 
> > Isn't this going to pick the max OPP every time we have more than one
> > runnable task on that CPU?
> > 
> > That would not fit the case where we have, say, two 10% tasks on that CPU.
> 
> Good point.
> 
> > The previous solution was better IMO, apart from using "overloaded"
> > instead of "overutilized" (which is not there yet) :-/
> 
> OK, so the one below works too.

Better... just one minor comment.


> ---
>  kernel/sched/cpufreq_schedutil.c |   11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> Index: linux-pm/kernel/sched/cpufreq_schedutil.c
> ===================================================================
> --- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
> +++ linux-pm/kernel/sched/cpufreq_schedutil.c
> @@ -37,6 +37,7 @@ struct sugov_policy {
>  	s64 freq_update_delay_ns;
>  	unsigned int next_freq;
>  	unsigned int cached_raw_freq;
> +	bool overload;

Can we avoid using "overload" in favor of a more generic,
schedutil-specific name? Mainly because in the future we would
probably like to switch from "overloaded" to "overutilized".

What about something like "busy"?

>  
>  	/* The next fields are only needed if fast switch cannot be used. */
>  	struct irq_work irq_work;
> @@ -61,6 +62,7 @@ struct sugov_cpu {
>  	unsigned long util;
>  	unsigned long max;
>  	unsigned int flags;
> +	bool overload;
>  };
>  
>  static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
> @@ -93,6 +95,9 @@ static void sugov_update_commit(struct s
>  {
>  	struct cpufreq_policy *policy = sg_policy->policy;
>  
> +	if (sg_policy->overload && next_freq < sg_policy->next_freq)
> +		next_freq = sg_policy->next_freq;
> +
>  	if (policy->fast_switch_enabled) {
>  		if (sg_policy->next_freq == next_freq) {
>  			trace_cpu_frequency(policy->cur, smp_processor_id());
> @@ -207,6 +212,8 @@ static void sugov_update_single(struct u
>  	if (!sugov_should_update_freq(sg_policy, time))
>  		return;
>  
> +	sg_policy->overload = this_rq()->rd->overload;

And then we can move this bit into an inline function, something like:

   static inline bool sugov_this_cpu_is_busy(void)
   {
           return this_rq()->rd->overload;
   }

so that in the future we can easily switch from using "overloaded" to
using the utilization signal.

> +
>  	if (flags & SCHED_CPUFREQ_RT_DL) {
>  		next_f = policy->cpuinfo.max_freq;
>  	} else {
> @@ -225,6 +232,8 @@ static unsigned int sugov_next_freq_shar
>  	unsigned long util = 0, max = 1;
>  	unsigned int j;
>  
> +	sg_policy->overload = false;
> +
>  	for_each_cpu(j, policy->cpus) {
>  		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
>  		unsigned long j_util, j_max;
> @@ -253,6 +262,7 @@ static unsigned int sugov_next_freq_shar
>  		}
>  
>  		sugov_iowait_boost(j_sg_cpu, &util, &max);
> +		sg_policy->overload = sg_policy->overload || j_sg_cpu->overload;
>  	}
>  
>  	return get_next_freq(sg_policy, util, max);
> @@ -273,6 +283,7 @@ static void sugov_update_shared(struct u
>  	sg_cpu->util = util;
>  	sg_cpu->max = max;
>  	sg_cpu->flags = flags;
> +	sg_cpu->overload = this_rq()->rd->overload;
>  
>  	sugov_set_iowait_boost(sg_cpu, time, flags);
>  	sg_cpu->last_update = time;
> 

-- 
#include <best/regards.h>

Patrick Bellasi
