Date:	Thu, 19 Apr 2007 10:49:12 +1000
From:	Peter Williams <pwil3058@...pond.net.au>
To:	Chris Friesen <cfriesen@...tel.com>
CC:	Mark Glines <mark@...nes.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Matt Mackall <mpm@...enic.com>, Nick Piggin <npiggin@...e.de>,
	Bill Huey <billh@...ppy.monkey.org>,
	Mike Galbraith <efault@....de>,
	William Lee Irwin III <wli@...omorphy.com>,
	linux-kernel@...r.kernel.org, ck list <ck@....kolivas.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [ck] Re: [Announce] [patch] Modular Scheduler Core and Completely
 Fair Scheduler [CFS]

Chris Friesen wrote:
> Mark Glines wrote:
> 
>> One minor question: is it even possible to be completely fair on SMP?
>> For instance, if you have a 2-way SMP box running 3 applications, one of
>> which has 2 threads, will the threaded app have an advantage here?  (The
>> current system seems to try to keep each thread on a specific CPU, to
>> reduce cache thrashing, which means threads and processes alike each
>> get 50% of the CPU.)
> 
> I think the ideal in this case would be to have both threads on one cpu, 
> with the other app on the other cpu.  This gives inter-process fairness 
> while minimizing the amount of task migration required.

Solving this sort of issue was one of the reasons for the smpnice patches.

> 
> More interesting is the case of three processes on a 2-cpu system.  Do 
> we constantly migrate one of them back and forth to ensure that each of 
> them gets 66% of a cpu?

Depends how keen you are on fairness.  Unless the processes are 
long-running, continuously active tasks that never sleep it's probably 
not an issue, as they'll move around enough over the long term for each 
of them to end up with roughly 66%.
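
Just to make the arithmetic concrete, here's a trivial user-space 
sketch (not kernel code; fair_share() is a made-up name for 
illustration) of the share each CPU-bound task can expect:

/* Hypothetical illustration: the fair share each of nr_tasks
 * CPU-bound tasks can expect on nr_cpus CPUs. */
#include <stdio.h>

static double fair_share(int nr_cpus, int nr_tasks)
{
	/* A single task can never use more than one whole CPU. */
	double share = (double)nr_cpus / nr_tasks;
	return share > 1.0 ? 1.0 : share;
}

int main(void)
{
	/* 3 tasks on a 2-way box: each should see 2/3 of a CPU (~66%). */
	printf("%.0f%%\n", fair_share(2, 3) * 100);
	return 0;
}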

Exact load balancing for real workloads (where tasks are coming and 
going, sleeping and waking semi-randomly and over relatively brief 
periods) is probably unattainable, because by the time you've worked out 
the ideal placement of the currently runnable tasks on the available 
CPUs it's all changed and the solution is invalid.  The best you can 
hope for is that the change isn't so great as to completely invalidate 
the solution, and that the changes you make as a result are an 
improvement on the current allocation of processes to CPUs.
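
Purely as an illustration of what computing such a placement from a 
snapshot looks like, here's a hypothetical user-space sketch of a naive 
greedy heuristic (largest load onto the least-loaded CPU).  This is not 
the kernel's actual load balancer; the point is that the per-task loads 
it starts from may already be stale by the time it finishes:

/* Hypothetical sketch of a greedy placement heuristic, NOT the
 * kernel's load balancer.  Task loads are a snapshot and invented
 * for illustration. */
#include <stdio.h>

#define NR_CPUS  2
#define NR_TASKS 5

int main(void)
{
	/* Snapshot of per-task load, largest first for the heuristic. */
	int load[NR_TASKS] = { 50, 40, 30, 20, 10 };
	int cpu_load[NR_CPUS] = { 0 };

	for (int t = 0; t < NR_TASKS; t++) {
		int best = 0;

		/* Pick the least-loaded CPU seen so far. */
		for (int c = 1; c < NR_CPUS; c++)
			if (cpu_load[c] < cpu_load[best])
				best = c;
		cpu_load[best] += load[t];
		printf("task %d (load %d) -> cpu %d\n", t, load[t], best);
	}
	for (int c = 0; c < NR_CPUS; c++)
		printf("cpu %d total load %d\n", c, cpu_load[c]);
	return 0;
}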

The above probably doesn't hold for some systems, such as large 
supercomputer jobs that run for several days, but those are probably 
best served by explicit allocation of processes to CPUs using the 
process affinity mechanism.
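
For reference, a minimal user-space sketch of that affinity mechanism, 
pinning the calling process to CPU 0 via sched_setaffinity(2) (the CPU 
number is just an example, and error handling is kept to a minimum):

/* Pin the calling process to CPU 0 using the affinity mechanism. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);		/* allow CPU 0 only */

	/* pid 0 means "the calling process". */
	if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
		perror("sched_setaffinity");
		return 1;
	}
	/* From here on the scheduler keeps this process on CPU 0. */
	return 0;
}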

Peter
-- 
Peter Williams                                   pwil3058@...pond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce
