Date:	Tue, 23 Apr 2013 10:23:26 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Vincent Guittot <vincent.guittot@...aro.org>
CC:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	LAK <linux-arm-kernel@...ts.infradead.org>,
	"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
	Ingo Molnar <mingo@...nel.org>,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	Paul Turner <pjt@...gle.com>,
	Santosh <santosh.shilimkar@...com>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Chander Kashyap <chander.kashyap@...aro.org>,
	"cmetcalf@...era.com" <cmetcalf@...era.com>,
	"tony.luck@...el.com" <tony.luck@...el.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Len Brown <len.brown@...el.com>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	Amit Kucheria <amit.kucheria@...aro.org>,
	Jonathan Corbet <corbet@....net>
Subject: Re: [RFC PATCH v3 5/6] sched: pack the idle load balance

Thank you, Preeti and Vincent, for discussing the power-aware scheduler in
such detail! I believe this open discussion will help us arrive at a more
comprehensive solution. :)

> Hi Preeti,
> 
> I have had a look at Alex's patches, but I have some concerns with them:
> - There is no notion of a power domain, which is quite important when we
> speak about power saving IMHO. Packing tasks is only of interest if the
> idle CPUs can reach a useful low power state independently from the busy
> CPUs. Architectures have different low power state capabilities, which
> must be taken into account. In addition, you can have systems with CPUs
> of differing power efficiency, and this kind of system is not taken into
> account.

I agree with you on this point, and I like what you've done in adding a
new flag to the sched domain; it also makes it easier for the scheduler
to pick up new balancing ideas. BTW, my balancing currently tries to pack
tasks per SMT sibling; maybe packing tasks by CPU horsepower would be
more compatible with other archs?
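
To make the "pack by horsepower" idea concrete, here is a minimal
userspace sketch (all names and numbers are hypothetical, not from the
actual patchset): pick the highest-capacity CPU that still has headroom,
preferring the busiest one among equals so tasks consolidate and the
remaining CPUs can idle.

	#include <stdio.h>

	/* Hypothetical per-CPU data: capacity is relative horsepower
	 * (e.g. 1024 for a big core, 512 for a little core); util is
	 * the current utilization on the same scale. */
	struct cpu_stat {
		int capacity;
		int util;
	};

	/* Pack onto the highest-capacity CPU that can absorb task_util
	 * without saturating; among equal capacities, prefer the busier
	 * CPU so the others stay idle. */
	static int pick_packing_cpu(const struct cpu_stat *cpus, int nr,
				    int task_util)
	{
		int best = -1;

		for (int i = 0; i < nr; i++) {
			if (cpus[i].util + task_util > cpus[i].capacity)
				continue; /* would overload this CPU */
			if (best < 0 ||
			    cpus[i].capacity > cpus[best].capacity ||
			    (cpus[i].capacity == cpus[best].capacity &&
			     cpus[i].util > cpus[best].util))
				best = i;
		}
		return best; /* -1: no room anywhere, fall back to spreading */
	}

	int main(void)
	{
		struct cpu_stat cpus[] = {
			{ 1024, 700 }, { 1024, 100 }, { 512, 0 }, { 512, 0 },
		};

		printf("pack on cpu %d\n", pick_packing_cpu(cpus, 4, 200));
		return 0;
	}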

> - There is some computation of statistics over a potentially large
> number of CPUs and groups at each task wakeup. This overhead concerns
> me; such an amount of computation should only be done when we have more
> time, as in the periodic load balance.

Usually, the computation is far cheaper than a task migration. If the
computation helps reduce possible future migrations, it saves a lot
overall. With the current code, I have observed that fork balancing
distributes tasks well under the powersaving policy; that suggests the
computation is worth it.
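
To illustrate the trade-off I am arguing (this is a simplified userspace
model, not the kernel code): the wakeup-time pass is a read-only linear
scan over per-CPU counters, which is cheap compared with a migration
that drags cache-hot state across CPUs.

	#include <stdio.h>

	/* Model of the wakeup-time statistics pass: one O(nr_cpus)
	 * read-only scan over a group's utilization, so the balancer
	 * can tell whether the group can take another task without a
	 * later corrective migration. */
	struct group {
		const int *cpu_util;
		int nr_cpus;
		int capacity;
	};

	static int group_has_room(const struct group *g, int task_util)
	{
		int total = 0;

		for (int i = 0; i < g->nr_cpus; i++)
			total += g->cpu_util[i];

		return total + task_util <= g->capacity;
	}

	int main(void)
	{
		int util[] = { 300, 200, 0, 0 };
		struct group g = { util, 4, 4096 };

		printf("room for task: %d\n", group_has_room(&g, 512));
		return 0;
	}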

> - There are some heuristics that will be hard to tune:
>   * the powersaving balance period is set to 8 * max_interval
>   * power saving can do a performance load balance if there was no
> performance load balance in the last 32 balances, with no more than 4
> perf balances in the last 64 balances

Do you have other tuning suggestions for these numbers? I would be glad
to hear any input.
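
For reference, my reading of those two rules as a standalone model (the
bitmap tracking and names here are hypothetical; only the 8/32/4/64
numbers come from the description above):

	#include <stdio.h>

	/* - the powersaving balance runs every 8 * max_interval;
	 * - from powersaving mode, a performance balance is allowed only
	 *   if none ran in the last 32 balances and at most 4 ran in the
	 *   last 64 balances. */
	#define PS_PERIOD_FACTOR	8
	#define PERF_QUIET_WINDOW	32
	#define PERF_MAX_IN_WINDOW	4	/* within the last 64 */

	struct balance_hist {
		/* bit i set: the balance i rounds ago was a perf balance;
		 * 64 bits cover exactly the last 64 balances */
		unsigned long long perf_bitmap;
	};

	static int perf_balance_allowed(const struct balance_hist *h)
	{
		unsigned long long recent =
			h->perf_bitmap & ((1ULL << PERF_QUIET_WINDOW) - 1);

		return recent == 0 &&
		       __builtin_popcountll(h->perf_bitmap) <= PERF_MAX_IN_WINDOW;
	}

	int main(void)
	{
		/* two perf balances, both more than 32 rounds ago */
		struct balance_hist h = { 0x3ULL << 40 };

		printf("powersaving period: %d * max_interval\n",
		       PS_PERIOD_FACTOR);
		printf("perf balance allowed: %d\n", perf_balance_allowed(&h));
		return 0;
	}
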
>  *sched_burst_threshold

I found it useful on the 3.8 kernel, where aim7 caused very imbalanced
wakeups. But aim7 has calmed down now that the lock-stealing rwsem is
used in the kernel, so this may need to be re-evaluated on future
versions.
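
My mental model of the knob, sketched below (names, window length, and
threshold are all hypothetical, for illustration only): when too many
wakeups land in a short window, treat the workload as bursty and spread
instead of packing, since per-wakeup packing decisions go stale under a
burst.

	#include <stdio.h>

	/* Hypothetical burst detector: more than BURST_THRESHOLD wakeups
	 * inside BURST_WINDOW_NS means the workload is bursty and the
	 * packing path should be skipped in favor of spreading. */
	#define BURST_WINDOW_NS		1000000ULL	/* 1 ms, illustrative */
	#define BURST_THRESHOLD		8		/* illustrative */

	struct wakeup_track {
		unsigned long long window_start_ns;
		unsigned int count;
	};

	static int is_bursty(struct wakeup_track *t, unsigned long long now_ns)
	{
		if (now_ns - t->window_start_ns > BURST_WINDOW_NS) {
			t->window_start_ns = now_ns;	/* open a new window */
			t->count = 0;
		}
		return ++t->count > BURST_THRESHOLD;
	}

	int main(void)
	{
		struct wakeup_track t = { 0, 0 };
		unsigned long long now = 0;
		int bursty = 0;

		for (int i = 0; i < 10; i++)
			bursty = is_bursty(&t, now += 1000); /* 10 wakeups in 10 us */

		printf("bursty: %d\n", bursty);
		return 0;
	}
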
> 
> I'm going to send a proposal for a more aggressive and scalable mode of
> my patches which will take care of my concerns. Let's see how this new
> patchset fits with Alex's.


-- 
Thanks Alex
