Message-ID: <4626D1D5.4040508@bigpond.net.au>
Date:	Thu, 19 Apr 2007 12:20:05 +1000
From:	Peter Williams <pwil3058@...pond.net.au>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
	Con Kolivas <kernel@...ivas.org>, ck list <ck@....kolivas.org>,
	Bill Huey <billh@...ppy.monkey.org>,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair
 Scheduler [CFS]

Ingo Molnar wrote:
> * Peter Williams <pwil3058@...pond.net.au> wrote:
> 
>>> And my scheduler for example cuts down the amount of policy code and 
>>> code size significantly.
>> Yours is one of the smaller patches mainly because you perpetuate (or 
>> you did in the last one I looked at) the (horrible to my eyes) dual 
>> array (active/expired) mechanism.  That this idea was bad should have 
>> been apparent to all as soon as the decision was made to excuse some 
>> tasks from being moved from the active array to the expired array.  
>> This essentially meant that there would be circumstances where extreme 
>> unfairness (to the extent of starvation in some cases) could occur -- 
>> the very thing that the mechanism was originally designed to prevent 
>> (as far as I can gather).  Right about then in the development of the 
>> O(1) scheduler, alternative solutions should have been sought.
> 
> in hindsight i'd agree.

Hindsight's a wonderful place, isn't it? :-) And, of course, it's where 
I was making my comments from.
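
For anyone who doesn't have the O(1) internals paged in, here's a toy 
model of the dual array mechanism I'm complaining about (the shape of 
the problem only -- nothing like the real kernel/sched.c, and the names 
and numbers are made up for illustration):

#include <stdio.h>
#include <stdbool.h>

/* Toy model of the O(1) dual-array design.  The real scheduler keeps
 * 140 priority lists per array; a single FIFO per array is enough to
 * show the hole. */

#define QSZ 8 /* power of two, >= max runnable tasks */

struct task { const char *name; bool interactive; };
struct queue { struct task *t[QSZ]; int head, tail; };

static void push(struct queue *q, struct task *t)
{
	q->t[q->tail++ & (QSZ - 1)] = t;
}
static struct task *pop(struct queue *q)
{
	return q->t[q->head++ & (QSZ - 1)];
}
static bool empty(const struct queue *q) { return q->head == q->tail; }

int main(void)
{
	struct task a = { "interactive", true };
	struct task b = { "batch-1", false }, c = { "batch-2", false };
	struct queue q0 = { { 0 } }, q1 = { { 0 } };
	struct queue *active = &q0, *expired = &q1;

	push(active, &a); push(active, &b); push(active, &c);

	for (int slice = 0; slice < 9; slice++) {
		if (empty(active)) { /* the array switch */
			struct queue *tmp = active;
			active = expired; expired = tmp;
		}
		struct task *t = pop(active);
		printf("slice %d: %s runs\n", slice, t->name);
		/* The exception at issue: tasks the interactivity
		 * heuristic likes go straight back on the ACTIVE
		 * array.  While one stays runnable the arrays never
		 * switch, so the batch tasks above run once and then
		 * wait indefinitely. */
		push(t->interactive ? active : expired, t);
	}
	return 0;
}

If memory serves, mainline later bounded the damage with an 
EXPIRED_STARVING() escape hatch, but by then the array switch no 
longer guaranteed anything by design.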

> But back then we were clearly not ready for fine-grained accurate 
> statistics + trees (CPUs are a lot faster at more complex arithmetic 
> today, plus people still believed that low-res could be done well 
> enough), and taking out either of these two concepts from CFS would 
> result in a similarly complex runqueue implementation.

I disagree.  The single priority array with a promotion mechanism that I 
use in the SPA schedulers can do the job of avoiding starvation with no 
measurable increase in overhead.  Fairness, nice, and good interactive 
responsiveness can then be managed by how you determine tasks' dynamic 
priorities.
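
To make that concrete, here's the general shape of the SPA idea as a 
toy (not the actual plugsched code; recalculating the runner's dynamic 
priority from its nice value and CPU usage is elided):

#include <stdio.h>

#define NTASKS 2

struct task {
	const char *name;
	int prio;	/* 0 = best; dynamic */
	int waited;	/* ticks since last run; breaks ties FIFO-ish */
};

int main(void)
{
	struct task t[NTASKS] = {
		{ "cpu-hog", 0, 0 }, { "low-prio", 5, 0 }
	};

	for (int tick = 0; tick < 10; tick++) {
		/* Pick the best priority; among equals, the longest
		 * waiter (one run list per level in a real SPA array
		 * gives you this round-robin behaviour for free). */
		struct task *run = &t[0];
		for (int i = 1; i < NTASKS; i++)
			if (t[i].prio < run->prio ||
			    (t[i].prio == run->prio &&
			     t[i].waited > run->waited))
				run = &t[i];

		printf("tick %2d: %-8s runs (prio %d)\n",
		       tick, run->name, run->prio);
		run->waited = 0;

		/* The promotion step: every task that stayed runnable
		 * but didn't run creeps up one level, so the worst
		 * case wait is bounded -- no starvation. */
		for (int i = 0; i < NTASKS; i++)
			if (&t[i] != run) {
				t[i].waited++;
				if (t[i].prio > 0)
					t[i].prio--;
			}
	}
	return 0;
}

The worst-case wait is bounded by (number of levels x promotion 
interval), and once the waiter reaches the hog's level the two simply 
round-robin.
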
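And for contrast, the "statistics + trees" approach amounts to keying 
tasks by accumulated virtual runtime and always running whoever has had 
the least.  A toy with a linear scan standing in for the red-black tree 
(the weights here are made up for illustration):

#include <stdio.h>

#define NTASKS 2
#define SLICE_NS 1000000UL	/* 1 ms per decision, arbitrary */
#define NICE_0_WEIGHT 1024UL

struct task { const char *name; unsigned long weight, vruntime; };

int main(void)
{
	struct task t[NTASKS] = {
		{ "nice-0", 1024, 0 },
		{ "niced",   512, 0 },	/* lower weight, charged faster */
	};

	for (int i = 0; i < 12; i++) {
		/* "leftmost node" lookup, minus the tree */
		struct task *run = &t[0];
		for (int j = 1; j < NTASKS; j++)
			if (t[j].vruntime < run->vruntime)
				run = &t[j];
		/* charge virtual time inversely to weight */
		run->vruntime += SLICE_NS * NICE_0_WEIGHT / run->weight;
		printf("%2d: %-6s (vruntime now %lu)\n",
		       i, run->name, run->vruntime);
	}
	return 0;
}

Starvation can't happen by construction: a task that isn't run keeps 
the smallest vruntime and gets picked next; the tree just makes the 
leftmost lookup cheap.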

> Also, the array switch was just thought of as another piece of 'if 
> the heuristics go wrong, we fall back to an array switch' logic, 
> right in line with the other heuristics. And you have to admit, 
> mainline's ability to auto-renice make -j jobs (and other CPU hogs) 
> was quite a plus for developers, so it had (and probably still has) 
> quite some inertia.

I agree; it wasn't totally useless, especially for the average user.  My 
main problem with it was that the effect of "nice" wasn't consistent or 
predictable enough for reliable resource allocation.
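
To illustrate: here's a rough model of mainline's nice-to-timeslice 
mapping (constants as I remember them from 2.6's kernel/sched.c, so 
treat them as approximate):

#include <stdio.h>

#define MIN_TIMESLICE    5	/* ms */
#define DEF_TIMESLICE    100	/* ms */
#define MAX_PRIO         140
#define MAX_USER_PRIO    40
#define NICE_TO_PRIO(n)  ((n) + 120)

static int scale_prio(int x, int prio)
{
	int ts = x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2);
	return ts > MIN_TIMESLICE ? ts : MIN_TIMESLICE;
}

static int task_timeslice(int nice)
{
	int prio = NICE_TO_PRIO(nice);
	return prio < NICE_TO_PRIO(0)
		? scale_prio(DEF_TIMESLICE * 4, prio)	/* negative nice */
		: scale_prio(DEF_TIMESLICE, prio);
}

int main(void)
{
	printf("nice  0 vs  5: %3dms vs %3dms\n",
	       task_timeslice(0), task_timeslice(5));
	printf("nice 10 vs 15: %3dms vs %3dms\n",
	       task_timeslice(10), task_timeslice(15));
	return 0;
}

That prints 100ms vs 75ms in the first case and 50ms vs 25ms in the 
second: the same five-step nice difference is a 4:3 CPU split near 
nice 0 and a 2:1 split further down the scale.  You can't tell a user 
what nice +5 will cost them without knowing where they start.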

I also agree with the aims of the various heuristics, i.e. you have to 
be unfair and give some tasks preferential treatment in order to give 
users the type of responsiveness that they want.  It's just a shame that 
fairness got broken in the process but, as you say, it's easier to see 
these things in hindsight than in the middle of the melee.

Peter
-- 
Peter Williams                                   pwil3058@...pond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce
