Message-Id: <1367804711-30308-1-git-send-email-alex.shi@intel.com>
Date:	Mon,  6 May 2013 09:45:04 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, bp@...en8.de, pjt@...gle.com,
	namhyung@...nel.org, efault@....de, morten.rasmussen@....com
Cc:	vincent.guittot@...aro.org, preeti@...ux.vnet.ibm.com,
	viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
	alex.shi@...el.com, mgorman@...e.de, riel@...hat.com,
	wangyun@...ux.vnet.ibm.com
Subject: [PATCH v5 0/7] use runnable load avg in load balance

This patchset is based on tip/sched/core.

It fixes a UP config bug. The last patch has also changed: it now feeds the
runnable load average into effective_load(), so wake_affine() considers the
load average via effective_load().
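
To illustrate the idea, here is a toy userspace model (my illustration only,
not the kernel code or the actual diff) of the decision this changes:
wake_affine() asks whether pulling the waking task to the waking CPU beats
leaving it on its previous CPU, and with this series the load it compares is
the decayed runnable average rather than the instantaneous weight. All names
and numbers below are assumptions for the sketch.

/*
 * Toy model of an affine-wakeup decision driven by runnable load
 * averages.  Build with: cc -o wa wa.c
 */
#include <stdio.h>

struct cpu_load {
	long runnable_load_avg;	/* decayed runnable load (PELT-style) */
};

/* Would pulling a task with load 'wl' to 'this' beat leaving it on 'prev'? */
static int wake_affine_model(const struct cpu_load *this,
			     const struct cpu_load *prev, long wl)
{
	long this_eff = this->runnable_load_avg + wl;	/* load if pulled */
	long prev_eff = prev->runnable_load_avg;	/* load left behind */

	return this_eff <= prev_eff;	/* affine wakeup looks profitable */
}

int main(void)
{
	struct cpu_load waker = { .runnable_load_avg = 300 };
	struct cpu_load prev  = { .runnable_load_avg = 900 };

	printf("pull to waking cpu: %s\n",
	       wake_affine_model(&waker, &prev, 200) ? "yes" : "no");
	return 0;
}

Because the average decays history in, a briefly idle but recently busy CPU
still reports meaningful load here, which is exactly what the instantaneous
weight misses.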

I retested on Intel Core2, NHM, SNB, and IVB machines with 2 and 4 sockets,
using the kbuild, aim7, dbench, tbench, hackbench, and fileio-cfq (sysbench)
benchmarks, and also tested pthread_cond_broadcast on SNB.
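
For reference, a minimal sketch of the kind of pthread_cond_broadcast stress
test meant here (thread and iteration counts are arbitrary assumptions, not
the exact test I ran): many waiters parked on one condvar are repeatedly
woken in a storm, which exercises the wakeup/wake_affine path.

/* Build with: cc -pthread -o cbtest cbtest.c */
#include <pthread.h>
#include <stdio.h>

#define WAITERS	8
#define ROUNDS	1000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static unsigned long generation;	/* guards against spurious wakeups */
static int done;

static void *waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!done) {
		unsigned long gen = generation;
		/* Sleep until the broadcaster bumps the generation. */
		while (generation == gen && !done)
			pthread_cond_wait(&cond, &lock);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tids[WAITERS];
	int i, r;

	for (i = 0; i < WAITERS; i++)
		pthread_create(&tids[i], NULL, waiter, NULL);

	for (r = 0; r < ROUNDS; r++) {
		pthread_mutex_lock(&lock);
		generation++;			/* publish a new round... */
		pthread_cond_broadcast(&cond);	/* ...and wake everyone */
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);

	for (i = 0; i < WAITERS; i++)
		pthread_join(tids[i], NULL);
	printf("%d broadcast rounds to %d waiters done\n", ROUNDS, WAITERS);
	return 0;
}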

The test results are similar to the last version, so the clear changes are
the same: on a 4-socket SNB EP machine, hackbench improved by about 50% and
its results became stable; on the other machines, hackbench improved by about
2~5%. There was no clear performance change on the other benchmarks.

Since the change is small and my results are similar to the last version, I
expect Michael's and Morten's results to keep their advantages as well.

Anyway, you can find their results for the last version here:

Michael Wang tested the previous version with pgbench on his box:
https://lkml.org/lkml/2013/4/2/1022

Morten tested the previous version with some benchmarks:
http://comments.gmane.org/gmane.linux.kernel/1463371
 
Thanks again for Peter's comments!

Regards!
Alex

 [PATCH v5 1/7] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
 [PATCH v5 2/7] sched: remove SMP cover for runnable variables in
 [PATCH v5 3/7] sched: set initial value of runnable avg for new
 [PATCH v5 4/7] sched: update cpu load after task_tick.
 [PATCH v5 5/7] sched: compute runnable load avg in cpu_load and
 [PATCH v5 6/7] sched: consider runnable load average in move_tasks
 [PATCH v5 7/7] sched: consider runnable load average in
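
The load signal that patches 5/7 and 6/7 feed into cpu_load and move_tasks is
the per-entity-tracked runnable average. As a toy illustration of how that
average behaves, here is a userspace model using the kernel's decay constants
(y^32 = 0.5 over ~1024us periods); it is a model for intuition, not the
kernel's fixed-point implementation, and the workload shape is made up.

/* Build with: cc -o pelt pelt.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* decay per 1024us period */
	double sum = 0.0, max = 0.0;
	int p;

	/* A task runnable for 64 periods, then idle for 64. */
	for (p = 0; p < 128; p++) {
		int runnable = p < 64;
		sum = sum * y + (runnable ? 1024.0 : 0.0);
		max = max * y + 1024.0;	/* what "always runnable" accrues */
		if (p % 16 == 15)
			printf("period %3d: runnable_avg = %4.0f/1024\n",
			       p + 1, 1024.0 * sum / max);
	}
	return 0;
}

The average saturates at 1024/1024 while the task runs and then halves every
32 idle periods, which is why balancing on it is steadier than balancing on
the instantaneous queue weight.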
