Message-ID: <514FD801.8070403@intel.com>
Date:	Mon, 25 Mar 2013 12:52:17 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
CC:	mingo@...hat.com, peterz@...radead.org, efault@....de,
	torvalds@...ux-foundation.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
	pjt@...gle.com, namhyung@...nel.org, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org, morten.rasmussen@....com
Subject: Re: [patch v5 14/15] sched: power aware load balance

On 03/22/2013 01:14 PM, Preeti U Murthy wrote:
>> > 
>> > the value is obtained from decay_load():
>> >  sa->runnable_avg_sum = decay_load(sa->runnable_avg_sum,
>> > in decay_load() it is possible for it to be set to zero.
> Yes, you are right, it is possible for it to be set to 0, but only after
> a very long time, to be more precise, nearly 2 seconds. If you look at
> decay_load(), only if the period between the last update and now has
> crossed (32*63) will the runnable_avg_sum become 0; otherwise it will
> simply decay.
> 
> This means that for nearly 2 seconds, consolidation of loads may not be
> possible even after the runqueues have finished executing the tasks
> running on them.

Looking into decay_load(): since LOAD_AVG_MAX is about 47742, which is
less than 2^16 = 65536, the maximum avg sum will have decayed to zero
after 16 halvings, i.e. after 16 * 32ms (about 512ms).

Yes, compared to the accumulation time of 345ms, the decay is not
symmetric and not precise; it seems there is room to tune it better.
But it is acceptable for now.
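
To illustrate the arithmetic, here is a minimal user-space sketch (not
the kernel code: the fractional runnable_avg_yN_inv factors are omitted,
only the halving per 32ms period and the 32*63 cutoff discussed above
are modelled):

#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_PERIOD	32	/* ~1ms periods per halving, i.e. ~32ms */
#define LOAD_AVG_MAX	47742	/* maximum runnable_avg_sum */

/* Simplified model: n is the number of ~1ms periods since the last
 * update; y^32 ~= 1/2, so the sum halves for every full 32 periods. */
static uint64_t decay_load_sketch(uint64_t val, uint64_t n)
{
	if (n > LOAD_AVG_PERIOD * 63)		/* ~2016ms: forced to 0 */
		return 0;
	return val >> (n / LOAD_AVG_PERIOD);	/* halve per full period */
}

int main(void)
{
	uint64_t ms;

	/* LOAD_AVG_MAX < 2^16, so 16 halvings (16 * 32ms = 512ms) already
	 * truncate the maximum sum to 0, well before the 2s cutoff. */
	for (ms = 0; ms <= 16 * LOAD_AVG_PERIOD; ms += LOAD_AVG_PERIOD)
		printf("%4llums: %llu\n", (unsigned long long)ms,
		       (unsigned long long)decay_load_sketch(LOAD_AVG_MAX, ms));
	return 0;
}
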
> 
> The exact experiment that I performed was running ebizzy with just two
> threads. My setup was 2 sockets, 2 cores each, 4 threads per core, so a
> 16 logical cpu machine. When I begin running ebizzy with the balance
> policy, the 2 threads of ebizzy are found one on each socket, while I
> would expect them to be on the same socket. All other cpus, except the
> ones running the ebizzy threads, are idle and not running anything on
> either socket. I am not running any other processes.

Did you try the simplest benchmark: while true; do :; done
I am writing the v6 version, which includes rt_util etc.; you may test
on it after I send it out. :)
> 
> You could run a similar experiment and let me know if you see otherwise.
> I am at a loss to understand why else such a spreading of load would
> occur, if not for rq->util not becoming 0 quickly when the runqueue is
> not running anything. I have used trace_printks to keep track of the
> util of those runqueues that are not running anything after maybe some
> initial load, and it does not become 0 till the end of the run.


-- 
Thanks Alex
