Date:	Fri, 10 May 2013 23:17:21 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, bp@...en8.de, pjt@...gle.com,
	namhyung@...nel.org, efault@....de, morten.rasmussen@....com
Cc:	vincent.guittot@...aro.org, preeti@...ux.vnet.ibm.com,
	viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
	alex.shi@...el.com, mgorman@...e.de, riel@...hat.com,
	wangyun@...ux.vnet.ibm.com
Subject: [patch 0/8]: use runnable load avg in balance

This patchset is based on tip/sched/core.

This version changes how the runnable load avg value is set for a new task
in patch 3.
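
For illustration only, here is a minimal user-space sketch of that idea,
assuming PELT-style tracking in which a freshly forked task has no history
yet; the constant and struct below are illustrative and are not the actual
kernel code from patch 3.

    #include <stdio.h>

    /* PELT-style constant (illustrative; the kernel's geometric series with
     * y^32 = 1/2 sums to roughly this maximum). */
    #define LOAD_AVG_MAX 47742

    struct sched_avg {
            unsigned long runnable_avg_sum;    /* decayed time spent runnable */
            unsigned long runnable_avg_period; /* decayed total time observed */
    };

    /*
     * A new task has no load history, so its average would start at 0 and it
     * would look weightless to the balancer.  One option (the idea the cover
     * letter refers to) is to start it as if it had been fully runnable for
     * the whole tracked window.
     */
    static void init_new_task_runnable_avg(struct sched_avg *sa)
    {
            sa->runnable_avg_sum = LOAD_AVG_MAX;
            sa->runnable_avg_period = LOAD_AVG_MAX;
    }

    int main(void)
    {
            struct sched_avg sa;

            init_new_task_runnable_avg(&sa);
            printf("new task runnable ratio: %lu/%lu\n",
                   sa.runnable_avg_sum, sa.runnable_avg_period);
            return 0;
    }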

We also tried to include the blocked load avg in load balancing, but found
performance drops on many benchmarks. Our guess is that the inflated cpu load
drives tasks to be woken on remote CPUs and causes wrong decisions in
periodic balancing.
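
For illustration only, a minimal user-space sketch of why that can happen,
assuming a simplified "pick the least loaded CPU" step; the structures and
numbers below are made up for illustration and are not the kernel's code.

    #include <stdio.h>

    /* Simplified per-CPU load tracking, roughly modelling the PELT sums. */
    struct cpu_load {
            unsigned long runnable_load_avg; /* load of tasks currently runnable */
            unsigned long blocked_load_avg;  /* load of tasks currently sleeping */
    };

    /* Pick the CPU that looks least loaded under the chosen metric. */
    static int pick_least_loaded(const struct cpu_load *cpus, int n,
                                 int include_blocked)
    {
            int best = 0;
            unsigned long best_load = ~0UL;

            for (int i = 0; i < n; i++) {
                    unsigned long load = cpus[i].runnable_load_avg;

                    if (include_blocked)
                            load += cpus[i].blocked_load_avg;
                    if (load < best_load) {
                            best_load = load;
                            best = i;
                    }
            }
            return best;
    }

    int main(void)
    {
            /* CPU 0 only has sleepers; CPU 1 has a task actually running. */
            struct cpu_load cpus[] = {
                    { .runnable_load_avg = 100, .blocked_load_avg = 900 },
                    { .runnable_load_avg = 600, .blocked_load_avg = 0 },
            };

            printf("runnable only      -> CPU %d\n", pick_least_loaded(cpus, 2, 0));
            printf("runnable + blocked -> CPU %d\n", pick_least_loaded(cpus, 2, 1));
            return 0;
    }

With runnable load only, CPU 0 is chosen (100 < 600); once blocked load is
added, CPU 0 looks heavier (1000 > 600) and the wakeup moves to the other CPU
even though CPU 0's tasks are merely sleeping.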

I retested on Intel Core2, NHM, SNB, and IVB machines with 2 and 4 sockets,
using kbuild, aim7, dbench, tbench, hackbench, oltp, netperf loopback, etc.
Performance is better now.

On an SNB EP 4-socket machine, hackbench improved by about 50% and the
results became stable. On other machines, hackbench improved by about 2~10%.
oltp improved by about 30% on an NHM EX box.
netperf loopback also improved on the SNB EP 4-socket box.
There were no clear changes on the other benchmarks.
 
Michael Wang tested a previous version with pgbench on his box:
https://lkml.org/lkml/2013/4/2/1022

Morten also tested a previous version:
http://comments.gmane.org/gmane.linux.kernel/1463371

Thanks for the comments from Peter, Paul, Morten, Michael and Preeti.
More comments are appreciated!

Regards
Alex

[patch v6 1/8] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
[patch v6 2/8] sched: move few runnable tg variables into CONFIG_SMP
[patch v6 3/8] sched: set initial value of runnable avg for new
[patch v6 4/8] sched: fix slept time double counting in enqueue
[patch v6 5/8] sched: update cpu load after task_tick.
[patch v6 6/8] sched: compute runnable load avg in cpu_load and
[patch v6 7/8] sched: consider runnable load average in move_tasks
[patch v6 8/8] sched: remove blocked_load_avg in tg
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
