Message-ID: <20070415204824.GA25813@elte.hu>
Date:	Sun, 15 Apr 2007 22:48:24 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Matt Mackall <mpm@...enic.com>
Cc:	Con Kolivas <kernel@...ivas.org>,
	Peter Williams <pwil3058@...pond.net.au>,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]


* Matt Mackall <mpm@...enic.com> wrote:

> Look at what happened with I/O scheduling. Opening things up to some 
> new ideas by making it possible to select your I/O scheduler took us 
> from 10 years of stagnation to healthy, competitive development, which 
> gave us a substantially better I/O scheduler.

actually, 2-3 years ago we already had IO schedulers, and my opinion 
against plugsched back then (also shared by Nick and Linus) was formed 
very much with them in mind. There are at least 4 reasons why I/O 
schedulers are different from CPU schedulers:

1) CPUs are a non-persistent resource shared by _all_ tasks and 
   workloads in the system. Disks are _persistent_ resources very much 
   attached to specific workloads. (If tasks had to be 'persistent' to
   the CPU they were started on we'd have very different scheduling
   technology, and there would be much less complexity; see the
   affinity sketch after this list.) A closer analogue to CPU
   schedulers would perhaps be VM/MM schedulers, and those tend to be 
   hard to modularize in a technologically sane way too. (And unlike 
   disks, there's no good generic way to attach VM/MM schedulers to 
   particular workloads.) So it's apples to oranges.

   in practice it comes down to having one good scheduler that runs all 
   workloads on a system reasonably well. And given that a very large 
   portion of systems run mixed workloads, the demand for one good 
   scheduler is pretty high. Meanwhile, i can run with mixed IO 
   schedulers just fine.

2) plugsched did not allow on-the-fly selection of schedulers, nor did
   it allow per-CPU selection of schedulers. IO schedulers you can 
   change per disk, on the fly, making them much more useful in
   practice (see the sysfs sketch after this list). Also, IO schedulers 
   (while definitely not being slow!) are a lot less performance 
   sensitive than CPU schedulers.

3) I/O schedulers are pretty damn clean code, and plugsched, at least
   the last version i saw of it, didn't even come close.

4) the good thing that happened to I/O, after years of stagnation, isn't
   I/O schedulers. The good thing that happened to I/O is called Jens
   Axboe. If you care about the I/O subsystem then print that name out 
   and hang it on the wall. That and only that is what mattered.
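
first, the affinity sketch referenced in point 1: a minimal
illustration of what 'persistence' to a CPU would look like, using
sched_setaffinity(2). the syscall is real; pinning every task this
way is purely hypothetical, and only meant to show how different such
a world would be:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(0, &set);       /* allow CPU 0 only */

            /*
             * pid 0 means the calling task. from here on the
             * scheduler must keep us on CPU 0 - i.e. we are now
             * 'persistent' to it, the way a file is to a disk.
             */
            if (sched_setaffinity(0, sizeof(set), &set) == -1) {
                    perror("sched_setaffinity");
                    return 1;
            }
            printf("pinned to CPU 0 (pid %d)\n", getpid());
            return 0;
    }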

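and the sysfs sketch referenced in point 2: switching a disk's I/O
scheduler on the fly is a single sysfs write - in a shell, simply
'echo deadline > /sys/block/sda/queue/scheduler'. below is a minimal
C version of the same, run as root; the disk name 'sda' and a
built-in 'deadline' scheduler are assumptions here:

    #include <stdio.h>

    int main(void)
    {
            /*
             * every block device exposes its own scheduler knob, so
             * different disks can run different I/O schedulers at
             * the same time - no reboot, no recompile.
             */
            const char *knob = "/sys/block/sda/queue/scheduler";
            FILE *f = fopen(knob, "w");

            if (!f) {
                    perror(knob);
                    return 1;
            }
            fputs("deadline\n", f); /* switch sda to 'deadline' */
            return fclose(f) ? 1 : 0;
    }
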
all in all, while there are definitely uses (embedded would like to have 
a smaller/different scheduler, etc.), the technical case for 
modularization for the sake of selectability is a lot lower for CPU 
schedulers than it is for I/O schedulers.

nor was the non-modularity of some piece of code ever an impediment to 
competition. May i remind you of the pretty competitive SLAB allocator 
landscape, resulting in things like the SLOB allocator, written by 
yourself? ;-)

	Ingo
