Date:	Fri, 4 Mar 2016 14:19:06 +0100
From:	"Rafael J. Wysocki" <rafael@...nel.org>
To:	Juri Lelli <juri.lelli@....com>
Cc:	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Linux PM list <linux-pm@...r.kernel.org>,
	Steve Muckle <steve.muckle@...aro.org>,
	ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Michael Turquette <mturquette@...libre.com>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH v2 10/10] cpufreq: schedutil: New governor based on
 scheduler utilization data

On Fri, Mar 4, 2016 at 12:26 PM, Juri Lelli <juri.lelli@....com> wrote:
> Hi Rafael,

Hi,

> On 04/03/16 04:35, Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>
>> Add a new cpufreq scaling governor, called "schedutil", that uses
>> scheduler-provided CPU utilization information as input for making
>> its decisions.
>>
>> Doing that is possible after commit fe7034338ba0 (cpufreq: Add
>> mechanism for registering utilization update callbacks) that
>> introduced cpufreq_update_util() called by the scheduler on
>> utilization changes (from CFS) and RT/DL task status updates.
>> In particular, CPU frequency scaling decisions may be based on
>> the utilization data passed to cpufreq_update_util() by CFS.
>>
>> The new governor is relatively simple.
>>
>> The frequency selection formula used by it is
>>
>>       next_freq = util * max_freq / max
>>
>> where util and max are the utilization and CPU capacity coming from CFS.
>>
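Spelled out in code, the quoted rule amounts to something like the
following (a minimal sketch with made-up names, not the actual
governor code):

static unsigned int pick_next_freq(unsigned long util, unsigned long max,
				   unsigned int max_freq)
{
	/* utilization saturated at capacity selects the top frequency */
	if (util >= max)
		return max_freq;

	return util * max_freq / max;
}
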
>
> The formula looks better to me now. However, the problem is that, if
> you have freq. invariance, util will slowly saturate to the current
> capacity. So we won't trigger OPP changes for a task that, for
> example, starts light and then becomes big.
>
> This is the same problem we faced with schedfreq. The current solution
> there is to use a margin for calculating a threshold (80% of current
> capacity ATM). Once util goes above that threshold we trigger an OPP
> change.  The current policy is pretty aggressive: we go to max_f and
> then adapt to the "real" util during successive enqueues. This was
> also thought to cope with the fact that PELT seems slow to react to
> abrupt changes in task behaviour.
>
> I'm not saying this is the definitive solution, but I fear something
> along these lines is needed when you add freq invariance into the mix.
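
To make that policy concrete, it amounts to roughly the following (a
sketch with hypothetical names, not the schedfreq code):

#define CAPACITY_MARGIN_PCT	80

static unsigned int margin_next_freq(unsigned long util, unsigned long cur_cap,
				     unsigned int cur_freq, unsigned int max_freq)
{
	unsigned long threshold = cur_cap * CAPACITY_MARGIN_PCT / 100;

	/* above the margin: jump straight to max_f, adapt on later updates */
	return util > threshold ? max_freq : cur_freq;
}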

I really would like to avoid adding factors that need to be determined
experimentally, because the result tends to depend on the system where
the experiment is carried out, and tunables simply don't work (99% of
users, or maybe even more, don't change the defaults anyway).

So I would really like to use a formula that's based on some science
and doesn't depend on additional input.

Now, since the equation generally is f = a * x + b (where f is the
frequency and x = util/max) and there are good arguments for b = 0, it
all boils down to what number to take as a.  a = max_freq is a good
candidate (that's what I'm using right now), but it may turn out to be
too small.  Another reasonable candidate is a = min_freq + max_freq,
because then x = 0.5 selects the frequency in the middle of the
available range, but that may turn out to be way too big if min_freq
is high (like higher than 50% of max_freq).
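
Schematically, the two candidates plug into the same helper (again
just a sketch, not proposed code):

/* x = util / max, b = 0; 'a' is either max_freq or min_freq + max_freq */
static unsigned int freq_linear(unsigned long util, unsigned long max,
				unsigned long a, unsigned int max_freq)
{
	unsigned long f = a * util / max;

	/* a = min_freq + max_freq can overshoot, so clamp */
	return f > max_freq ? max_freq : f;
}

With a = max_freq, util == max maps exactly to max_freq; with a =
min_freq + max_freq, half utilization maps to the middle of the
available range, but anything above x = max_freq / (min_freq +
max_freq) already clamps to max_freq.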

I need to think more about that, and admittedly my understanding of
the consequences of frequency invariance is limited ATM.

Thanks,
Rafael
