Message-ID: <20140121145708.GY31570@twins.programming.kicks-ass.net>
Date:	Tue, 21 Jan 2014 15:57:08 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Alex Shi <alex.shi@...aro.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Amit Kucheria <amit.kucheria@...aro.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: top-down balance purpose discussion -- resend

On Tue, Jan 21, 2014 at 10:04:26PM +0800, Alex Shi wrote:
> 
> The current scheduler load balance is bottom-up: each CPU has to
> initiate balancing by itself.
> 
> 1. Take an integrated computer system with smt/core/cpu/numa, i.e. 4
> scheduler domain levels. If there are just 2 tasks in the whole system,
> both running on cpu0, the current load balance first pulls a task to
> another smt sibling in the smt domain, then to another core, then to
> another cpu (package), and finally to another numa node. In total it
> takes 4 task moves to balance the system.

Except that the idle load balancer, and especially the newidle balance,
can bypass this entirely.

If you do the packing right in the newidle pass, you'd get there in 1
step.

> Generally, the task moving complexity is
> 	O(nm log n),  n := nr_cpus, m := nr_tasks
> 
> There is an excellent summary and explanation of this in
> kernel/sched/fair.c:4605

Which is a perfectly fine scheme for a busy system.

> Another weakness of the current LB is that every cpu needs to fetch
> the other cpus' load info repeatedly and try to figure out the busiest
> sched group/queue at every sched domain level. That is often wasted
> time, since it may not lead to a task move. One reason is that a cpu
> can only pull tasks, not push them.

This doesn't make sense; in fact, we already do a limited amount of
3rd-party movement.

Whatever you do, you have to repeat the information gathering anyhow,
because it constantly changes.

Trying to serialize that doesn't make any kind of sense. The only thing
you want is that the system converges.

I skipped the rest because it seems built on a premise I don't agree
with. That 4-move thing is just silly for an idle system, and we
shouldn't do that.

I also very much do not want a single CPU balancing the entire system;
that's the antithesis of scalable.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
