Message-ID: <20130612102019.GA6976@arm.com>
Date: Wed, 12 Jun 2013 11:20:19 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Arjan van de Ven <arjan@...ux.intel.com>
Cc: David Lang <david@...g.hm>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Ingo Molnar <mingo@...nel.org>,
Morten Rasmussen <Morten.Rasmussen@....com>,
"alex.shi@...el.com" <alex.shi@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Mike Galbraith <efault@....de>,
"pjt@...gle.com" <pjt@...gle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linaro-kernel <linaro-kernel@...ts.linaro.org>,
"len.brown@...el.com" <len.brown@...el.com>,
"corbet@....net" <corbet@....net>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linux PM list <linux-pm@...r.kernel.org>
Subject: Re: power-efficient scheduling design

Hi Arjan,

On Wed, Jun 12, 2013 at 02:48:58AM +0100, Arjan van de Ven wrote:
> On 6/11/2013 5:27 PM, David Lang wrote:
> > Nobody is saying that this sort of thing should be in the fastpath
> > of the scheduler.
> >
> > But if the scheduler has a table that tells it the possible states,
> > and the cost to get from the current state to each of these states
> > (and to get back and/or wake up to full power), then the scheduler
> > can make the decision on what to do, invoke a routine to make the
> > change (and in the meantime, not be fighting the change by trying to
> > schedule processes on a core that's about to be powered off), and
> > then when the change happens, the scheduler will have a new version
> > of the table of possible states and costs
> >
> > This isn't in the fastpath, it's in the rebalancing logic.
>
> the reality is much more complex unfortunately.
> C and P states hang together tightly, and even C state on one core
> impacts other cores' performance, just like P state selection on one
> core impacts other cores.
>
> (at least for x86, we should really stop talking as if the OS picks
> the "frequency", that's just not the case anymore)

I agree, the reality is very complex. But we should go back and analyse
what problem we are trying to solve and what each framework is trying to
address.

When viewed separately from the scheduler, cpufreq and cpuidle governors
do the right thing. But they both base their actions on the CPU load
(balance) decided by the scheduler, and it's the latter that we are
trying to adjust (and we are still debating what the right approach is).

Since such information seems too complex to be moved into the scheduler,
why don't we put cpufreq in charge of restricting the load balancing to
certain CPUs? It already tracks the load/idle time to (gradually) change
the P state. Depending on the governor/policy, it could decide that (for
example) 4 CPUs running at a higher-power P state are enough, telling the
scheduler to ignore the other CPUs. It wouldn't pick a frequency directly
but, as it currently does, adjust it to keep some minimal idle time on
those CPUs. If that's no longer possible (high load), it can lift the
restriction and let the scheduler use the other idle CPUs (cpufreq could
even do a direct load_balance() call). This is a governor decision and
the user stays in control of which governors are used.
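
Something along these lines, just to illustrate the idea (the
sched_set_usable_cpus() hook and the load estimate below are made up,
not existing kernel interfaces):

#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/cpufreq.h>

/* Assumed hook into the scheduler, does not exist today. */
extern void sched_set_usable_cpus(const struct cpumask *mask);

static struct cpumask usable_mask;

/*
 * Hypothetical governor helper: pick how many CPUs the scheduler may
 * balance over, based on the overall load the governor already tracks
 * for P-state selection.
 */
static void governor_update_restriction(struct cpufreq_policy *policy,
                                        unsigned int load_pct)
{
        unsigned int needed, cpu, n = 0;

        /* Rough estimate of how many CPUs the current load needs. */
        needed = DIV_ROUND_UP(load_pct * num_online_cpus(), 100);
        if (!needed)
                needed = 1;

        cpumask_clear(&usable_mask);
        for_each_online_cpu(cpu) {
                if (n++ >= needed)
                        break;
                cpumask_set_cpu(cpu, &usable_mask);
        }

        /* Tell the scheduler to ignore the remaining online CPUs. */
        sched_set_usable_cpus(&usable_mask);
}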

Cpuidle, I think, can stay the same for now, gradually entering deeper
sleep states. It could later be unified with cpufreq if there are any
benefits. When deciding the load balancing restrictions, though, cpufreq
should perhaps be aware of C-state latencies.
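
To make the latency point concrete (worth_parking_cpu() and the
next_need_us estimate are made up for illustration; only the cpuidle
exit_latency field is real):

#include <linux/cpuidle.h>
#include <linux/types.h>

/*
 * Hypothetical check: only take a CPU out of the load-balancing mask if
 * the exit latency of the deepest C-state it is likely to reach is small
 * compared with how soon the governor expects to need it again.
 */
static bool worth_parking_cpu(u64 next_need_us)
{
        struct cpuidle_driver *drv = cpuidle_get_driver();
        unsigned int deepest;

        if (!drv || !drv->state_count)
                return false;

        deepest = drv->state_count - 1;

        /* Arbitrary 10x margin, purely illustrative. */
        return (u64)drv->states[deepest].exit_latency * 10 < next_need_us;
}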

Cpufreq would need to gain more knowledge of the power topology and
thermal management. It would still be the framework restricting the P
state or tightening the load balancing restrictions to let CPUs cool
down. More hooks could be added if needed for better responsiveness
(e.g. on idle entry or task wake-up).
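
The extra hooks could be as simple as notifications from the scheduler
and idle paths into cpufreq (both names below are made up):

/* Assumed cpufreq entry points, do not exist today. */
extern void cpufreq_notify_idle_enter(int cpu);
extern void cpufreq_notify_task_wakeup(int cpu);

/* e.g. called from the idle loop just before handing over to cpuidle */
static inline void sched_idle_enter_hook(int cpu)
{
        cpufreq_notify_idle_enter(cpu);
}

/* e.g. called from try_to_wake_up() once the target CPU is selected */
static inline void sched_wakeup_hook(int cpu)
{
        cpufreq_notify_task_wakeup(cpu);
}
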
With the above, the scheduler will just focus on performance (given the
restrictions imposed by cpufreq) and it only needs to be aware of the
CPU topology from a performance perspective (caches, hyperthreading)
together with the cpu_power parameter for the weighted load.
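
For reference, cpu_power enters the picture roughly like this in the
load balancer (a simplified model of the current fair-class code, not a
literal copy):

#define SCHED_POWER_SCALE       1024UL

/*
 * A group's load is scaled by its cpu_power, so CPUs with reduced
 * capacity (RT pressure, slower microarchitecture, etc.) appear
 * "fuller" at the same raw load and attract fewer tasks.
 */
static unsigned long scaled_group_load(unsigned long raw_load,
                                       unsigned long cpu_power)
{
        /* cpu_power == SCHED_POWER_SCALE means a full-capacity CPU. */
        return raw_load * SCHED_POWER_SCALE / cpu_power;
}
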
--
Catalin