Date:	Tue, 17 Apr 2007 09:33:08 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	William Lee Irwin III <wli@...omorphy.com>
Cc:	Davide Libenzi <davidel@...ilserver.org>,
	Nick Piggin <npiggin@...e.de>,
	Peter Williams <pwil3058@...pond.net.au>,
	Mike Galbraith <efault@....de>,
	Con Kolivas <kernel@...ivas.org>, ck list <ck@....kolivas.org>,
	Bill Huey <billh@...ppy.monkey.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]


* William Lee Irwin III <wli@...omorphy.com> wrote:

> On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> > I had a quick look at Ingo's code yesterday. Ingo is always smart 
> > enough to prepare a main dish (feature) with a nice side dish (code 
> > cleanup) for Linus ;) And even this code does that pretty nicely. 
> > The deadline design looks good, although I think the final "key" 
> > calculation code will end up quite different from what it looks 
> > like now.
> 
> The additive nice_offset breaks nice levels. For nice levels to 
> work, a multiplicative priority weighting of a different, nonnegative 
> metric of CPU utilization than the one now used is required. I've 
> been trying to point this out politely by strongly suggesting that 
> you test whether nice levels work.
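
As a side note, the distinction being argued can be sketched with a 
toy two-task runqueue - illustrative only, this is neither the CFS nor 
the mainline code, and the ~1.25x-per-nice-step weight ratio is an 
assumed value. Each tick the task with the smaller key runs; an 
additive offset is paid once and washes out, while a multiplicative 
weight holds the CPU split at the weight ratio:

#include <stdio.h>

#define TICKS 1000000L

int main(void)
{
	/* task 0 at nice 0, task 1 at nice 5 */
	double add_key[2] = { 0.0, 50.0 };	/* additive: one-time offset */
	double mul_key[2] = { 0.0, 0.0 };	/* multiplicative: weighted clock */
	double weight[2] = { 1.0, 1.0 / 3.05 };	/* assumed ~1.25^-5 */
	long add_ran[2] = { 0, 0 }, mul_ran[2] = { 0, 0 };
	long t;

	for (t = 0; t < TICKS; t++) {
		int a = (add_key[0] <= add_key[1]) ? 0 : 1;
		int m = (mul_key[0] <= mul_key[1]) ? 0 : 1;

		/* additive: every task's key advances at the same rate */
		add_key[a] += 1.0;
		add_ran[a]++;
		/* multiplicative: key advances inversely to task weight */
		mul_key[m] += 1.0 / weight[m];
		mul_ran[m]++;
	}
	printf("additive:       nice0 %4.1f%%  nice5 %4.1f%%\n",
	       100.0 * add_ran[0] / TICKS, 100.0 * add_ran[1] / TICKS);
	printf("multiplicative: nice0 %4.1f%%  nice5 %4.1f%%\n",
	       100.0 * mul_ran[0] / TICKS, 100.0 * mul_ran[1] / TICKS);
	return 0;
}

The additive run prints a ~50/50 split (the +50 offset only delays 
task 1 by 50 ticks, then both keys advance in lockstep), while the 
weighted run holds ~75/25, i.e. the 3.05:1 weight ratio, indefinitely.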

Granted, CFS's nice code is still incomplete, but you err quite 
significantly with the extreme claim that it is "broken".

Nice levels certainly work to a fair degree even in the current code, 
even though much of the focus is elsewhere - just try it. (In fact, I 
claim that CFS's nice levels often work _better_ than the mainline 
scheduler's nice-level support, for the testcases that matter to 
users.)

The precise behavior of nice levels, as I pointed out in previous 
mails, is largely 'uninteresting', and it has changed multiple times 
in the past 10 years.

What matters to users is mainly this: whether X reniced to -10 gets 
enough CPU time, and whether stuff reniced to +19 doesn't take away 
too much CPU time from the rest of the system. _How_ a Linux scheduler 
achieves this is an internal matter, and certainly CFS does it in a 
hacky way at the moment.
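
For what it's worth, that user-level check can be as small as the 
following sketch (assuming a Linux box; both hogs are pinned to CPU 0 
via sched_setaffinity() so they actually compete, and the loop counts 
after 5 seconds approximate the CPU split):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>
#include <sys/wait.h>

static volatile sig_atomic_t stop;

static void on_alarm(int sig)
{
	(void)sig;
	stop = 1;
}

/* busy-loop at the given nice level for 5 seconds, then report */
static void hog(int nice_val)
{
	unsigned long loops = 0;

	signal(SIGALRM, on_alarm);
	setpriority(PRIO_PROCESS, 0, nice_val);
	alarm(5);
	while (!stop)
		loops++;
	printf("nice %+3d: %lu loops\n", nice_val, loops);
	exit(0);
}

int main(void)
{
	cpu_set_t set;

	/* pin everything to CPU 0 so the two hogs compete directly */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	if (fork() == 0)
		hog(0);
	if (fork() == 0)
		hog(19);
	wait(NULL);
	wait(NULL);
	return 0;
}

If renicing works, the +19 hog's loop count comes out as a small 
fraction of the nice-0 one; near-equal counts would mean nice is 
being ignored.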

All the rest - 'CPU bandwidth utilization' or whatever abstract metric 
we could come up with - is just a fancy academic technicality that has 
no real significance to any of the testers who are trying CFS right 
now.

Sure, we prefer final solutions that are clean and make sense (because 
such things are the easiest to maintain long-term), and often such 
final solutions are quite close to academic concepts; I think Davide 
observed this correctly when he said that "the final key calculation 
code will end up quite different from what it looks like now". But 
your extreme-end claim of 'breakage' for something that is just plain 
incomplete is not really a fair characterisation at this point.

Anyone who thinks that there exist only two kinds of code - 100% 
correct and 100% incorrect, with no shades of grey in between - is in 
reality a sort of extremist whom, depending on mood and affection, we 
could call either a 'coding purist' or a 'coding taliban' ;-)

	Ingo
