Date:	Mon, 14 Jul 2014 15:04:35 +0100
From:	Morten Rasmussen <>
To:	Peter Zijlstra <>
Cc:	Vincent Guittot <>,
	Ingo Molnar <>,
	linux-kernel <>,
	Russell King - ARM Linux <>,
	LAK <>,
	Preeti U Murthy <>,
	Mike Galbraith <>,
	Nicolas Pitre <>,
	"" <>,
	Daniel Lezcano <>,
	Dietmar Eggemann <>
Subject: Re: [PATCH v3 09/12] Revert "sched: Put rq's sched_avg under

On Mon, Jul 14, 2014 at 02:20:52PM +0100, Peter Zijlstra wrote:
> On Mon, Jul 14, 2014 at 01:55:29PM +0100, Morten Rasmussen wrote:
> > On Fri, Jul 11, 2014 at 09:12:38PM +0100, Peter Zijlstra wrote:
> > > On Fri, Jul 11, 2014 at 07:39:29PM +0200, Vincent Guittot wrote:
> > > > In my mind, arch_scale_cpu_freq was intended to scale the capacity of
> > > > the CPU according to the current dvfs operating point.
> > > > As it's no longer used anywhere now that we have arch_scale_cpu, we could
> > > > probably remove it .. and see when it will become used.
> > > 
> > > I probably should have written comments when I wrote that code, but it
> > > was meant to be used only where, as described above, we limit things.
> > > Ondemand and such, which will temporarily decrease freq, will ramp it up
> > > again at demand, and therefore lowering the capacity will skew things.
> > > 
> > > You'll put less load on because it runs slower, and then you'll run it
> > > slower because there's less load on -> cyclic FAIL.
> > 
> > Agreed. We can't use a frequency-scaled compute capacity for all
> > load-balancing decisions. However, IMHO, it would be useful to know
> > the current compute capacity in addition to the max compute capacity
> > when considering energy costs. So we would have something like:
> > 
> > * capacity_max: cpu capacity at highest frequency.
> > 
> > * capacity_cur: cpu capacity at current frequency.
> > 
> > * capacity_avail: cpu capacity currently available. Basically
> >   capacity_cur taking rt, deadline, and irq accounting into account.
> > 
> > capacity_max should probably include rt, deadline, and irq accounting as
> > well. Or we need both?
> I'm struggling to fully grasp your intent. We need DVFS like accounting
> for sure, and that means a current freq hook, but I'm not entirely sure
> how that relates to capacity.

We can abstract all the factors that affect current compute capacity
(frequency, P-states, big.LITTLE, ...) in the scheduler by having
something like capacity_{cur,avail} tell us how much capacity a
particular cpu has in its current state. Assuming that we implement
scale invariance for entity load tracking (we are working on that), we
can directly compare task utilization with compute capacity for
balancing decisions. For example, we can figure out how much spare
capacity a cpu has in its current state simply by:

spare_capacity(cpu) = capacity_avail(cpu) - \sum_{tasks(cpu)}^{t} util(t)

If you put more than spare_capacity(cpu) worth of task utilization on
the cpu, you will cause the cpu (and any affected cpus) to change
P-state and potentially be less energy-efficient.

Does that make any sense?

Instead of dealing with frequencies directly in the scheduler code, we
can abstract it by just having scalable compute capacity.

> > Based on your description arch_scale_freq_capacity() can't be abused to
> > implement capacity_cur (and capacity_avail) unless it is repurposed.
> > Nobody seems to implement it. Otherwise we would need something similar
> > to update capacity_cur (and capacity_avail).
> Yeah, I never got around to doing so. I started doing a APERF/MPERF SMT
> capacity thing for x86 but never finished that. The naive implementation
> suffered the same FAIL loop as above because APERF stops on idle. So
> when idle your capacity drops to nothing, leading to no new work,
> leading to more idle etc.
> I never got around to fixing that -- adding an idle filter, and ever
> since things have somewhat bitrotted.

I see.

> > As a side note, we can potentially get into a similar fail cycle already
> > due to the lack of scale invariance in the entity load tracking.
> Yah, I think that got mentioned a long while ago.

It did :-)

> > > > > In that same discussion ISTR a suggestion about adding avg_running time,
> > > > > as opposed to the current avg_runnable. The sum of avg_running should be
> > > > > much more accurate, and still react correctly to migrations.
> > > > 
> > > > I haven't looked in detail, but I agree that avg_running would be much
> > > > more accurate than avg_runnable and should probably fit the
> > > > requirement. Does it mean that we could re-add the avg_running (or
> > > > something similar) that disappeared during the review of the load avg
> > > > tracking patchset?
> > > 
> > > Sure, I think we killed it there because there wasn't an actual use for
> > > it and I'm always in favour of stripping everything to their bare bones,
> > > esp big and complex things.
> > > 
> > > And then later, add things back once we have need for it.
> > 
> > I think it is a useful addition to the set of utilization metrics. I
> > don't think it is universally more accurate than runnable_avg. Actually
> > quite the opposite when the cpu is overloaded. But for partially loaded
> > cpus it is very useful if you don't want to factor in waiting time on
> > the rq.
> Well, different things different names. Utilization as per literature is
> simply the fraction of CPU time actually used. In that sense running_avg
> is about right for that. Our current runnable_avg is entirely different
> (as I think we all agree by now).
> But yes, in application the tipping point is u == 1: up until that
> point pure utilization makes sense, after that our runnable_avg makes
> more sense.


If you really care about latency/performance you might be interested in
comparing running_avg and runnable_avg even for u < 1. If the
running_avg/runnable_avg ratio is significantly less than one, tasks
are waiting on the rq to be scheduled.