Message-ID: <11389655.8tCmxgKokW@vostro.rjw.lan>
Date: Mon, 05 May 2014 02:32:31 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Amit Kucheria <amit.kucheria@...aro.org>,
Ingo Molnar <mingo@...e.hu>,
Lists linaro-kernel <linaro-kernel@...ts.linaro.org>,
Linux PM list <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] sched: idle: Add sched balance option
On Tuesday, April 29, 2014 12:25:39 PM Daniel Lezcano wrote:
> On 04/29/2014 01:11 AM, Rafael J. Wysocki wrote:
> > On Monday, April 28, 2014 01:07:31 PM Daniel Lezcano wrote:
[cut]
> > In my opinion it would be much better to have a knob representing the current
> > relative value of energy to the user (which may depend on things like whether
> > or not the system is on battery etc) and indicating how far we need to go with
> > energy-saving efforts.
> >
> > So if that knob is 0, we'll do things that are known-good for performance.
> > If it is 1, we'll make some extra effort to save energy as well, possibly at
> > a small expense of performance if that's necessary. If it is 100, we'll do
> > all we can to save as much energy as possible without caring about performance
> > at all.
> >
> > And it doesn't even have to be scheduler-specific, it very well may be global.
>
> That would be very nice, but I don't see how we can quantify this energy
> and handle it generically in the kernel for all the hardware.
>
> I am pretty sure we will discover that, because of architectural differences,
> some specific option will consume more power (argh! energy, I mean) on one
> kind of hardware than on another.
>
> From my personal experience, when we are facing this kind of complexity
> and heuristics, it is a sign that userspace has some work to do.
>
> What I am proposing is not in contradiction with your approach: it is
> about exporting a lot of knobs to userspace and letting userspace decide
> how to map '0' <--> '100' onto these options. Nothing prevents the
> different platforms from setting default values for these options.
Our experience so far, however, is that user space is not really likely to
change the default values of such knobs.
> From my POV, cgroups could be a good solution for that, for different
> reasons. One especially good reason is that we can attach the energy policy
> to individual tasks instead of the entire system.
>
> Let's imagine the following scenario:
>
> A user has a laptop running a mailer that checks for email every 5
> minutes. The system is switched to 'power'. The user wants to play a video
> game, but due to the 'power' policy the game is not playable, so he
> forces the policy to 'performance'. All the tasks will then use the
> 'performance' policy, thus consuming more energy.
This isn't only about the given task, but also about devices that are in use
while it is being run. For example, to play a game the user would probably
like the input subsystem to be more responsive and the screen to be brighter
etc. If that is a network game, the network adapter will probably need to
work in the "performance" mode too.
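
Just to illustrate that part with something concrete: eth0 and its path below
are only an example, but the power/control attribute and its "on"/"auto"
values are the usual runtime PM knobs.  User space could pin such a device to
full power for the duration of the game and release it afterwards, roughly
like this:

/*
 * Rough userspace sketch only: keep a device from runtime-suspending
 * while a latency-sensitive task runs.  The device path is an example.
 */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* "on" disables runtime PM for the device, "auto" re-enables it */
	sysfs_write("/sys/class/net/eth0/device/power/control", "on");

	/* ... run the game ..., then restore the default: */
	sysfs_write("/sys/class/net/eth0/device/power/control", "auto");
	return 0;
}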
> If we do it per task, the video game will use the 'performance' policy and
> the other tasks on the system will use the 'power' policy. Userspace
> can then decide to freeze the application running as 'performance' if
> we reach a critical battery level.
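For concreteness, I suppose the interface you have in mind would be driven
from user space roughly like the sketch below; the "power" controller and its
"power.energy_policy" attribute are entirely made up here, only cgroup.procs
is the standard file:

/*
 * Purely hypothetical sketch of the per-task policy idea: the "power"
 * controller and the "power.energy_policy" file do not exist today.
 */
#include <stdio.h>
#include <sys/stat.h>

static void cg_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (f) {
		fputs(val, f);
		fclose(f);
	}
}

int main(void)
{
	mkdir("/sys/fs/cgroup/power/game", 0755);

	/* made-up attribute: 'performance' vs. 'power' */
	cg_write("/sys/fs/cgroup/power/game/power.energy_policy", "performance");

	/* cgroup.procs is the standard way to move a task into the group */
	cg_write("/sys/fs/cgroup/power/game/cgroup.procs", "1234" /* game PID, example */);

	return 0;
}
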
Well, consider task packing in that context and suppose that according to the
current policy task packing should be applied to "energy efficient" tasks, but
not to "performance" tasks. Now, suppose that there's an "energy efficient"
task to run and there's a core occupied by a "performance" task. Should the
"energy efficient" task be run on that core or should we find another one
for it? Who's more important in that case?
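
In (purely hypothetical) code, the dilemma looks roughly like this; none of
the helpers below exist, the sketch only shows where the per-task policy
leaves the question open:

/*
 * Hypothetical sketch only: none of these helpers exist in the
 * scheduler; it just shows where the policy question surfaces.
 */
static int select_cpu_for(struct task_struct *p)
{
	int packed = find_busiest_packing_cpu();  /* core already running a task */
	int idle = find_idle_cpu();               /* would have to be woken up */

	if (task_energy_policy(p) == POLICY_ENERGY) {
		/*
		 * Packing saves energy, but the target core may be occupied
		 * by a POLICY_PERFORMANCE task.  Do we disturb that task or
		 * wake the idle core?  Which of the two is more important?
		 */
		return packed;  /* or idle?  That is exactly the open question. */
	}

	return idle;
}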
> Cgroups are a good framework for doing that and give a lot of flexibility
> to userspace. I understand Peter does not like cgroups, but I have not
> given up on convincing him; cgroups could be a good solution :)
Will user space actually use that flexibility?
> Looking forward, if the energy policy is tied to the task, in the
> future we can normalize energy consumption, attach an 'energy
> load' to each task and reuse the load tracking for energy, do per-task
> energy accounting, 'nice' per energy, etc ...
That can be done, but some parts of the kernel that may benefit from an
"energy conservation bias" knob are not tied to any particular task. The
block layer may be one example here.
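
To make the knob idea from above concrete, a rough sketch could look like the
following; the name, the 0..100 semantics and the placement under
/proc/sys/kernel are all made up for illustration, and in practice it would
not have to be scheduler-specific at all:

/*
 * Rough sketch of a single global 0..100 "energy bias" knob.  The name
 * and its placement under /proc/sys/kernel are made up for illustration.
 */
#include <linux/init.h>
#include <linux/sysctl.h>

int sysctl_energy_bias;	/* 0 = performance, 100 = save as much energy as possible */

static int zero;
static int one_hundred = 100;

static struct ctl_table energy_bias_table[] = {
	{
		.procname	= "energy_bias",
		.data		= &sysctl_energy_bias,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
	{ }
};

static int __init energy_bias_init(void)
{
	register_sysctl("kernel", energy_bias_table);
	return 0;
}
late_initcall(energy_bias_init);

Consumers (the scheduler, cpuidle, the block layer and so on) would then only
read sysctl_energy_bias and scale their own trade-offs accordingly.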
> Going back to reality: concretely, this sysctl patch did not reach a
> consensus, so I will resend the two other patches, hoping the discussion
> will lead to an agreement.
Well, the discussion so far has been useful to me anyway. :-)
Thanks!
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.