Message-ID: <alpine.DEB.2.02.1306111722470.24968@nftneq.ynat.uz>
Date:	Tue, 11 Jun 2013 17:27:23 -0700 (PDT)
From:	David Lang <david@...g.hm>
To:	Daniel Lezcano <daniel.lezcano@...aro.org>
cc:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	Catalin Marinas <catalin.marinas@....com>,
	Ingo Molnar <mingo@...nel.org>,
	Morten Rasmussen <Morten.Rasmussen@....com>,
	"alex.shi@...el.com" <alex.shi@...el.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Mike Galbraith <efault@....de>,
	"pjt@...gle.com" <pjt@...gle.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linaro-kernel <linaro-kernel@...ts.linaro.org>,
	"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
	"len.brown@...el.com" <len.brown@...el.com>,
	"corbet@....net" <corbet@....net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linux PM list <linux-pm@...r.kernel.org>
Subject: Re: power-efficient scheduling design

On Mon, 10 Jun 2013, Daniel Lezcano wrote:

> Some SoCs have a cluster of cpus sharing some resources, e.g. cache, so
> they must enter the same state at the same moment. Besides the
> synchronization mechanisms, that adds a dependency on the next event.
> For example, the u8500 board has a couple of cpus. In order for them
> to enter retention, both must enter the same state, but not necessarily
> at the same moment. The first cpu will wait in WFI and the second one
> will initiate the retention mode when entering this state.
> Unfortunately, some time could have passed while the second cpu entered
> this state, and the next event for the first cpu could be too close, thus
> violating the criteria the governor used when it chose this state for the
> second cpu.
>
> Also the latencies could change with the frequencies, so there is a
> dependency on cpufreq: the lower the frequency, the higher the
> latency. If the scheduler decides to go to a specific state assuming
> the exit latency is a given duration, and the frequency then decreases,
> this exit latency could increase as well and leave the system less
> responsive.
>
> I don't know how the latency computations were made (e.g. worst case,
> taken at the lowest frequency or not), but we have just one set of
> values. That could happen with the current code.
>
> Another point is the timer that allows detecting a bad decision and
> going to a deeper idle state. With the cluster dependency described
> above, we may wake up a particular cpu, which turns on the cluster and
> makes the entire cluster wake up in order to enter a deeper state, which
> could fail because the other cpu may not fulfill the constraint at that
> moment.

Nobody is saying that this sort of thing should be in the fastpath of the 
scheduler.

But if the scheduler has a table that tells it the possible states, and the cost 
to get from the current state to each of those states (and to get back and/or 
wake up to full power), then the scheduler can make the decision on what to do, 
invoke a routine to make the change (and in the meantime not fight the 
change by trying to schedule processes on a core that's about to be powered 
off), and then when the change happens, the scheduler will have a new version of 
the table of possible states and costs.

This isn't in the fastpath, it's in the rebalancing logic.

David Lang
