Date:   Fri, 18 Oct 2019 15:44:40 +0100
From:   Douglas Raillard <douglas.raillard@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        mingo@...hat.com, rjw@...ysocki.net, viresh.kumar@...aro.org,
        juri.lelli@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, qperret@...gle.com,
        patrick.bellasi@...bug.net, dh.han@...sung.com
Subject: Re: [RFC PATCH v3 0/6] sched/cpufreq: Make schedutil energy aware



On 10/18/19 1:07 PM, Peter Zijlstra wrote:
> On Fri, Oct 18, 2019 at 12:46:25PM +0100, Douglas Raillard wrote:
> 
>>> What I don't see is how that that difference makes sense as input to:
>>>
>>>     cost(x) : (1 + x) * cost_j
>>
>> The actual input is:
>> x = (EM_COST_MARGIN_SCALE/SCHED_CAPACITY_SCALE) * (util - util_est)
>>
>> Since EM_COST_MARGIN_SCALE == SCHED_CAPACITY_SCALE == 1024, this factor of 1
>> is not directly reflected in the code but is important for units
>> consistency.
> 
> But completely irrelevant for the actual math and conceptual
> understanding.

 > how that that difference makes sense as input to
I was unsure whether you were referring to the units being
inconsistent or to the actual way of computing the values being
strange, so I provided some justification for both.

> Just because computers suck at real numbers, and floats
> are expensive, doesn't mean we have to burden ourselves with fixed point
> when writing equations.
> 
> Also, as a physicist I'm prone to normalizing everything to 1, because
> that's lazy.
> 
>>> I suppose that limits the additional OPP to twice the previously
>>> selected cost / efficiency (see the confusion from that other email).
>>> But given that efficency drops (or costs rise) for higher OPPs that
>>> still doesn't really make sense..
> 
>> Yes, this current limit to +100% freq boosting is somehow arbitrary and
>> could probably benefit from being tunable in some way (Kconfig option
>> maybe). When (margin > 0), we end up selecting an OPP that has a higher cost
>> than the one strictly required, which is expected. The goal is to speed
>> things up at the expense of more power consumed to achieve the same work,
>> hence at a lower efficiency (== higher cost).
> 
> No, no Kconfig knobs.
> 
>> That's the main reason why this boosting apply a margin on the cost of the
>> selected OPP rather than just inflating the util. This allows controlling
>> directly how much more power (battery life) we are going to spend to achieve
>> some work that we know could be achieved with less power.
> 
> But you're not; the margin is relative to the OPP, it is not absolute.

Considering a CPU with 1024 max capacity (since we are not talking about 
migrations here, we can ignore CPU invariance):

work = normalized number of iterations of a given busy loop
# Thanks to freq invariance
work = util (between 0 and 1)
util = f/f_max

# f(work) is the min freq that is admissible for "work", which we will
# abbreviate as "f"
f(work) = work * f_max

# from struct em_cap_state doc in energy_model.h
cost(f) = power(f) * f_max / f
cost(f) = power(f) / util
cost(f) = power(f) / work
power(f) = cost(f) * work

boosted_cost(f) = cost(f) + x
boosted_power(f) = boosted_cost(f) * work
boosted_power(f) = (cost(f) + x) * work

# Let's normalize cost() so we can forget about f and deal only with work.
cost'(work) = cost(f)/cost(f_max)
x' = x/cost(f_max)
boosted_power'(work) = (cost'(work) + x') * work
boosted_power'(work) = cost'(work) * work + x' * work
boosted_power'(work) = power'(work) + x' * work
boosted_power'(work) = power'(work) + A(work)

# Over a duration T, we spend an extra amount of energy B
B(work) = A(work) * T
lost_battery_percent(work) = 100 * B(work)/total_battery_energy
lost_battery_percent(work) = 100 * T * x' * work /total_battery_energy
lost_battery_percent(work) =
  (100 * T / cost(f_max) / total_battery_energy) * x * work

This means that the effect of boosting on battery life is proportional 
to "x" unless I made a mistake somewhere.

> 
> Or rather, the only actual limit is in relation to the max OPP. So you
> have very little actual control over how much more energy you're
> spending.
> 
>>> So while I agree that 2) is a reasonable signal to work from, everything
>>> that comes after is still much confusing me.
> 
>> "When applying these boosting rules on the runqueue util signals ...":
>> Assuming the set of enqueued tasks stays the same between 2 observations
>> from schedutil, if we see the rq util_avg increase above its
>> util_est.enqueued, that means that at least one task had its util_avg go
>> above util_est.enqueued. We might miss some boosting opportunities if some
>> (util - util_est) compensates:
>> TASK_1(util - util_est) = - TASK_2(util - util_est)
>> but working on the aggregated value is much easier in schedutil, to avoid
>> crawling the list of entities.
> 
> That still does not explain why 'util - util_est', when >0, makes for a
> sensible input into an OPP relative function.
> 
> I agree that 'util - util_est', when >0, indicates utilization is
> increasing (for the aperiodic blah blah blah). But after that I'm still
> confused.

For the same reason PELT makes a sensible input for OPP selection.
Currently, OPP selection is based on max(util_avg, util_est.enqueued) 
(from cpu_util_cfs in sched.h), so as soon as we have
(util - util_est > 0), the OPP will be selected according to util_avg. 
In a way, using util_avg there is already some kind of boosting.
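
As a rough standalone illustration of that selection path (a plain C
sketch of the shape of cpu_util_cfs() plus the utilization-to-frequency
mapping, not the actual kernel code, and ignoring details such as the
built-in frequency headroom):

/* Toy model of the current frequency request path: the utilization fed
 * to schedutil is max(util_avg, util_est.enqueued), clamped to capacity.
 * Plain C sketch, not the kernel implementation. */
#include <stdio.h>

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

static unsigned long requested_freq(unsigned long util_avg,
				    unsigned long util_est_enqueued,
				    unsigned long max_cap,
				    unsigned long max_freq)
{
	unsigned long util = max_ul(util_avg, util_est_enqueued);

	if (util > max_cap)
		util = max_cap;
	return max_freq * util / max_cap;
}

int main(void)
{
	/* As soon as util_avg rises above util_est.enqueued, util_avg alone
	 * drives the request, i.e. a mild form of boosting already. */
	printf("freq = %lu\n", requested_freq(700, 512, 1024, 2000000));
	printf("freq = %lu\n", requested_freq(300, 512, 1024, 2000000));
	return 0;
}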

Since the boost is essentially (util - constant), it grows the same 
way as util. If we think of (util - util_est) as an estimate of how 
wrong we were about the task's "true" utilization of the CPU, then it 
makes sense to feed that to the boost. The more wrong we were, the 
more we want to boost: the more time passes, the more the scheduler 
realizes it does not actually know what the task needs. When in 
doubt, provide a higher freq than usual until we get to know this 
task better. When that happens (at the next period), boosting is 
disabled and we revert to the usual behavior (i.e. margin=0).
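
A standalone sketch of that idea (not the actual patch; names and
scales are purely illustrative):

/* Sketch of the boosting idea described above, not the actual patch:
 * the cost margin tracks how far util_avg has overshot util_est, and
 * drops back to zero once util_est catches up at the next period. */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL
#define EM_COST_MARGIN_SCALE	1024UL

static unsigned long cost_margin(unsigned long util_avg,
				 unsigned long util_est_enqueued)
{
	if (util_avg <= util_est_enqueued)
		return 0;

	/* Unit conversion from capacity scale to cost-margin scale; both
	 * are 1024 so the factor is 1, but it matters for consistency. */
	return (util_avg - util_est_enqueued) *
	       EM_COST_MARGIN_SCALE / SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* The further util_avg grows past util_est, the larger the margin;
	 * once util_est is updated, the margin collapses to 0. */
	printf("margin = %lu\n", cost_margin(600, 512)); /* overshoot  */
	printf("margin = %lu\n", cost_margin(800, 512)); /* larger     */
	printf("margin = %lu\n", cost_margin(800, 800)); /* re-sampled */
	return 0;
}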

Hope we are converging to some wording that makes sense.
