Date:	Tue, 10 May 2016 17:26:05 +0200
From:	Mike Galbraith <umgwanakikbuti@...il.com>
To:	Yuyang Du <yuyang.du@...el.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, Chris Mason <clm@...com>,
	Ingo Molnar <mingo@...nel.org>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	linux-kernel@...r.kernel.org
Subject: Re: sched: tweak select_idle_sibling to look for idle threads

On Tue, 2016-05-10 at 09:49 +0200, Mike Galbraith wrote:

>  Only whacking
> cfs_rq_runnable_load_avg() with a rock makes schbench -m <sockets> -t
> <near socket size> -a work well.  'Course a rock in its gearbox also
> rendered load balancing fairly busted for the general case :)

A smaller rock doesn't injure heavy tbench but, more importantly, still
demonstrates the issue when you want full spread.

schbench -m4 -t38 -a

cputime 30000 threads 38 p99 177
cputime 30000 threads 39 p99 10160

LB_TIP_AVG_HIGH
cputime 30000 threads 38 p99 193
cputime 30000 threads 39 p99 184
cputime 30000 threads 40 p99 203
cputime 30000 threads 41 p99 202
cputime 30000 threads 42 p99 205
cputime 30000 threads 43 p99 218
cputime 30000 threads 44 p99 237
cputime 30000 threads 45 p99 245
cputime 30000 threads 46 p99 262
cputime 30000 threads 47 p99 296
cputime 30000 threads 48 p99 3308

47*4+4 = nr_cpus yay: 47 workers per messenger times 4 messengers plus
the 4 message threads exactly fills the box, so p99 only falls over once
we oversubscribe at -t48.

---
 kernel/sched/fair.c     |    3 +++
 kernel/sched/features.h |    1 +
 2 files changed, 4 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3027,6 +3027,9 @@ void remove_entity_load_avg(struct sched
 
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
 {
+	if (sched_feat(LB_TIP_AVG_HIGH) && cfs_rq->load.weight > cfs_rq->runnable_load_avg*2)
+		return cfs_rq->runnable_load_avg + min_t(unsigned long, NICE_0_LOAD,
+							 cfs_rq->load.weight/2);
 	return cfs_rq->runnable_load_avg;
 }
 
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -67,6 +67,7 @@ SCHED_FEAT(RT_PUSH_IPI, true)
 SCHED_FEAT(FORCE_SD_OVERLAP, false)
 SCHED_FEAT(RT_RUNTIME_SHARE, true)
 SCHED_FEAT(LB_MIN, false)
+SCHED_FEAT(LB_TIP_AVG_HIGH, false)
 SCHED_FEAT(ATTACH_AGE_LOAD, true)
 
 SCHED_FEAT(OLD_IDLE, false)
