Message-ID: <20140714162249.GE9918@twins.programming.kicks-ass.net>
Date: Mon, 14 Jul 2014 18:22:49 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Russell King - ARM Linux <linux@....linux.org.uk>,
LAK <linux-arm-kernel@...ts.infradead.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Mike Galbraith <efault@....de>,
Nicolas Pitre <nicolas.pitre@...aro.org>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Dietmar Eggemann <Dietmar.Eggemann@....com>
Subject: Re: [PATCH v3 09/12] Revert "sched: Put rq's sched_avg under
CONFIG_FAIR_GROUP_SCHED"
On Mon, Jul 14, 2014 at 03:04:35PM +0100, Morten Rasmussen wrote:
> > I'm struggling to fully grasp your intent. We need DVFS-like accounting
> > for sure, and that means a current freq hook, but I'm not entirely sure
> > how that relates to capacity.
>
> We can abstract all the factors that affect current compute capacity
> (frequency, P-states, big.LITTLE,...) in the scheduler by having
> something like capacity_{cur,avail} to tell us how much capacity a
> particular cpu has in its current state. Assuming that we implement scale
> invariance for entity load tracking (we are working on that), we can
> directly compare task utilization with compute capacity for balancing
> decisions. For example, we can figure out how much spare capacity a cpu
> has in its current state by simply:
>
> spare_capacity(cpu) = capacity_avail(cpu) - \sum_{t \in tasks(cpu)} util(t)
>
> If you put more than spare_capacity(cpu) worth of task utilization on
> the cpu, you will cause the cpu (and any affected cpus) to change
> P-state and potentially be less energy-efficient.
>
> Does that make any sense?
>
> Instead of dealing with frequencies directly in the scheduler code, we
> can abstract it by just having scalable compute capacity.
Ah, ok. Same thing then.
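
As a rough illustration of the spare-capacity arithmetic above (the stub
structures, names and the 0..1024 fixed-point scale are made up for the
example, not the actual load-tracking code):

/*
 * Illustration only: spare_capacity(cpu) = capacity_avail(cpu) minus the
 * sum of the utilization of the tasks currently on that cpu, all on the
 * same (assumed) 0..1024 fixed-point scale.
 */
#include <stdio.h>

struct task_stub {
	unsigned long util;		/* scale-invariant task utilization */
};

struct cpu_stub {
	unsigned long capacity_avail;	/* capacity at the current P-state */
	struct task_stub *tasks;	/* tasks currently on this cpu */
	int nr_tasks;
};

/* Negative result: the cpu is already over its current capacity. */
static long spare_capacity(const struct cpu_stub *cpu)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < cpu->nr_tasks; i++)
		sum += cpu->tasks[i].util;

	return (long)cpu->capacity_avail - (long)sum;
}

int main(void)
{
	struct task_stub tasks[] = { { 300 }, { 150 }, { 100 } };
	struct cpu_stub cpu = {
		.capacity_avail = 800,	/* capped by the current P-state */
		.tasks = tasks,
		.nr_tasks = 3,
	};

	/*
	 * Placing a task with util > 250 here would force a P-state change
	 * (or overload the cpu if it is already at the highest P-state).
	 */
	printf("spare capacity: %ld\n", spare_capacity(&cpu));
	return 0;
}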
> > But yes, for application the tipping point is u == 1, up until that
> > point pure utilization makes sense, after that our runnable_avg makes
> > more sense.
>
> Agreed.
>
> If you really care about latency/performance you might be interested in
> comparing running_avg and runnable_avg even for u < 1. If the
> running_avg/runnable_avg ratio is significantly less than one, tasks are
> waiting on the rq to be scheduled.
Indeed, that gives a measure of queueing.
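
For illustration, that queueing check could look roughly like the sketch
below (again, the names and fixed-point scale are assumptions, not the real
implementation):

/*
 * Illustration only: running_avg / runnable_avg as a queueing indicator,
 * on an assumed 0..1024 fixed-point scale. A ratio close to 1024 means
 * the task runs nearly whenever it is runnable; a significantly smaller
 * ratio means it spends time waiting on the rq behind other tasks.
 */
#include <stdio.h>

#define SCALE 1024UL

static unsigned long queueing_ratio(unsigned long running_avg,
				    unsigned long runnable_avg)
{
	if (!runnable_avg)
		return SCALE;
	return running_avg * SCALE / runnable_avg;
}

int main(void)
{
	/*
	 * Runnable 60% of the time but running only 40% of the time:
	 * the task waited on the rq for a third of its runnable time.
	 */
	printf("ratio: %lu/%lu\n", queueing_ratio(400, 600), SCALE);
	return 0;
}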