Message-ID: <46A672F8.2040305@redhat.com>
Date:	Tue, 24 Jul 2007 17:45:28 -0400
From:	Chris Snook <csnook@...hat.com>
To:	Chris Friesen <cfriesen@...tel.com>
CC:	"Li, Tong N" <tong.n.li@...el.com>, mingo@...e.hu,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC] scheduler: improve SMP fairness in CFS

Chris Friesen wrote:
> Chris Snook wrote:
> 
>> I don't think Chris's scenario has much bearing on your patch.  What 
>> he wants is to have a task that will always be running, but can't 
>> monopolize either CPU. This is useful for certain realtime workloads, 
>> but as I've said before, realtime requires explicit resource 
>> allocation.  I don't think this is very relevant to SCHED_FAIR balancing.
> 
> I'm not actually using the scenario I described; it's just sort of a 
> worst-case load-balancing thought experiment.
> 
> What we want to be able to do is to specify a fraction of each cpu for 
> each task group.  We don't want to have to affine tasks to particular cpus.

A fraction of *each* CPU, or a fraction of *total* CPU?  Per-cpu granularity 
doesn't make anything more fair.  You've got a big bucket of MIPS you want to 
divide between certain groups, but it shouldn't make a difference which CPUs 
those MIPS come from, other than the fact that we try to minimize overhead 
induced by migration.
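
To make the distinction concrete, here's a throwaway userspace sketch of the 
"bucket of MIPS" view (the group names, weights, and CPU count are made up for 
illustration, not taken from the patch):

#include <stdio.h>

/*
 * Toy illustration only: entitlement as a fraction of *total* CPU,
 * computed from group weights.  Which physical CPUs supply the cycles
 * doesn't affect fairness; it only affects migration overhead.  The
 * group names, weights, and CPU count below are made up.
 */
struct group { const char *name; unsigned int weight; };

int main(void)
{
	struct group groups[] = { { "A", 2048 }, { "B", 1024 }, { "C", 1024 } };
	unsigned int ncpus = 2, total_weight = 0;
	unsigned int i, n = sizeof(groups) / sizeof(groups[0]);

	for (i = 0; i < n; i++)
		total_weight += groups[i].weight;

	for (i = 0; i < n; i++)
		printf("%s: %.1f%% of total CPU (%.2f CPUs' worth)\n",
		       groups[i].name,
		       100.0 * groups[i].weight / total_weight,
		       (double)ncpus * groups[i].weight / total_weight);
	return 0;
}

Whether group A gets its 1.00 CPUs' worth from one CPU or spread across both 
shouldn't matter for fairness, only for cache and migration cost.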

> This means that the load balancer must be group-aware, and must trigger 
> a re-balance (possibly just for a particular group) as soon as the cpu 
> allocation for that group is used up on a particular cpu.

If I have two threads with the same priority, and two CPUs, the scheduler will 
put one on each CPU, and they'll run happily without any migration or balancing. 
It sounds like you're saying that every X milliseconds, you want both to expire, 
be forbidden from running on the current CPU for the next X milliseconds, and 
then migrated to the other CPU.  There's no gain in fairness here, and there's 
a big drop in performance.
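
Just to put a number on it, here's a hypothetical little simulation of that 
scenario (the quantum and run length are invented, and the forced-expiry policy 
is my reading of the proposal, not code from the patch):

#include <stdio.h>

/*
 * Hypothetical simulation of the two-thread, two-CPU case above, with a
 * per-CPU group quota that expires every X ms and forces each thread
 * onto the other CPU.  All numbers are made up for illustration.
 */
int main(void)
{
	unsigned int quantum_ms = 10, total_ms = 1000, t;
	unsigned int cpu_of[2] = { 0, 1 };
	unsigned long runtime_ms[2] = { 0, 0 }, migrations = 0;

	for (t = 0; t < total_ms; t += quantum_ms) {
		/* both threads are always runnable and always run */
		runtime_ms[0] += quantum_ms;
		runtime_ms[1] += quantum_ms;
		/* forced expiry: each thread must move to the other CPU */
		cpu_of[0] ^= 1;
		cpu_of[1] ^= 1;
		migrations += 2;
	}
	printf("thread0 %lu ms, thread1 %lu ms, migrations %lu\n",
	       runtime_ms[0], runtime_ms[1], migrations);
	return 0;
}

Both threads end up with exactly the same CPU time either way; all the forced 
expiry adds is a couple hundred cache-cold migrations.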

I suggested local fairness as a means to achieve global fairness because it 
could reduce overhead, and by adding up the margin of error at each level of the 
locality hierarchy, you get an algorithm that naturally tolerates only the 
unfairness it is impossible to optimize away.  Strict local fairness for its own 
sake doesn't accomplish anything better than global fairness.
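
As a toy illustration of what I mean by adding up the margin of error at each 
level (the per-level percentages here are invented; only the compounding is the 
point):

#include <stdio.h>

/*
 * Sketch of the "margin of error per level" idea: tolerate a small
 * imbalance between SMT siblings, a bigger one within a package, a
 * bigger one still across nodes, and stop balancing below that.
 * The percentages are invented for illustration.
 */
int main(void)
{
	const char *level[] = { "SMT siblings", "cores in a package",
				"packages in a node", "nodes" };
	double margin_pct[] = { 1.0, 2.0, 5.0, 10.0 };
	double cumulative = 0.0;
	unsigned int i;

	for (i = 0; i < 4; i++) {
		cumulative += margin_pct[i];
		printf("%-20s local margin %4.1f%%, tolerated global skew %5.1f%%\n",
		       level[i], margin_pct[i], cumulative);
	}
	return 0;
}

The running total is the global unfairness the algorithm tolerates at each 
distance; pick the per-level margins from migration cost and it lines up with 
the unfairness you couldn't optimize away anyway.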

	-- Chris
