Date:	Sun, 29 Apr 2007 05:55:22 -0700
From:	William Lee Irwin III <wli@...omorphy.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	Kasper Sandberg <lkml@...anurb.dk>, Willy Tarreau <w@....eu>,
	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Gene Heskett <gene.heskett@...il.com>,
	linux-kernel@...r.kernel.org, Con Kolivas <kernel@...ivas.org>,
	Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Peter Williams <pwil3058@...pond.net.au>, caglar@...dus.org.tr,
	Mark Lord <lkml@....ca>, Zach Carter <linux@...hcarter.com>,
	buddabrod <buddabrod@...il.com>
Subject: Re: [patch] CFS scheduler, -v6

On Sun, Apr 29, 2007 at 02:13:30PM +0200, Thomas Gleixner wrote:
> SD is a one-to-one replacement of the existing scheduler guts - with a
> different behaviour.
> CFS is a huge step towards a modular and hierarchical scheduler design,
> which allows more than just implementing a clever scheduler for a single
> purpose. In a hierarchical scheduler you can implement resource
> management and other fancy things; in the monolithic design of the
> current scheduler (and its proposed replacement, SD) you can't. But SD
> can be made one of the modular variants.

The modularity provided is not enough to allow an implementation of
mainline, SD, or nicksched without significant core scheduler impact.

CFS doesn't have all that much to do with scheduler classes. A weak form
of them was implemented in tandem with the scheduler itself. The
modularity provided is sufficiently weak that the advantage is largely
prettiness of the code. So essentially CFS is every bit as monolithic as
mainline, SD, et al., with some dressing that suggests modularity without
actually making any accommodations for alternative policies (e.g.
reverting to mainline).
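
For concreteness, here is a minimal sketch of the sort of policy-driver
table in question. The names are hypothetical throughout; this is not
the struct sched_class from the -v6 patch, just the general shape:

	/*
	 * Illustrative only: a policy-driver table of roughly this shape.
	 * The core scheduler calls through the hooks; a policy supplies them.
	 */
	struct rq;			/* per-CPU runqueue, opaque here */
	struct task_struct;		/* opaque here */

	struct sched_policy_ops {
		void (*enqueue_task)(struct rq *rq, struct task_struct *p);
		void (*dequeue_task)(struct rq *rq, struct task_struct *p);
		struct task_struct *(*pick_next_task)(struct rq *rq);
		void (*put_prev_task)(struct rq *rq, struct task_struct *p);
		void (*task_tick)(struct rq *rq, struct task_struct *p);
	};

	/* core side, e.g. in schedule(): dispatch through the table */
	static inline struct task_struct *
	policy_pick_next(struct rq *rq, const struct sched_policy_ops *ops)
	{
		return ops->pick_next_task(rq);
	}

A table this narrow is enough to make one policy's code look tidy,
which is about all the dressing buys; it is not enough to carry
mainline, SD, or nicksched, as below.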

You'll hit the holes in the driver API quite quickly should you attempt
to port mainline to it. You'll hit several missing driver operations
right in schedule(), for starters. At some point you may also notice
that a simple enqueue operation is not all that's needed. Representing
enqueueing to active vs. expired and to head vs. tail is needed for
current mainline to be representable by a set of driver operations.
It's also a bit silly to remove and re-insert a queue element for CFS
(or anything else using a tree-structured heap, which yes, a search
tree is, even if a slow one), when a reprioritization driver operation
would do, but I suppose it won't oops.
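
To put the missing pieces in the same hypothetical terms (again, not
the -v6 interface), the sketch above would have to grow into something
more like:

	/*
	 * Hypothetical extensions: what a driver API would need before
	 * current mainline's O(1) arrays could be expressed through it,
	 * plus a reprioritization hook so a tree-structured policy can
	 * rebalance in place instead of dequeueing and re-enqueueing.
	 */
	struct rq;
	struct task_struct;

	enum enqueue_where {
		ENQUEUE_ACTIVE_HEAD,	/* active array, head of the prio list */
		ENQUEUE_ACTIVE_TAIL,	/* active array, tail of the prio list */
		ENQUEUE_EXPIRED_TAIL,	/* expired array, timeslice used up */
	};

	struct sched_policy_ops {
		void (*enqueue_task)(struct rq *rq, struct task_struct *p,
				     enum enqueue_where where);
		void (*dequeue_task)(struct rq *rq, struct task_struct *p);
		/* change priority in place; a tree rebalances here */
		void (*reprioritize_task)(struct rq *rq, struct task_struct *p,
					  int new_prio);
		struct task_struct *(*pick_next_task)(struct rq *rq);
		void (*put_prev_task)(struct rq *rq, struct task_struct *p);
	};

Nothing exotic, but it shows how quickly "a set of driver operations"
stops being just enqueue/dequeue.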

You'll also hit the same holes should you attempt to write such a
modularity patch for mainline as opposed to porting current mainline to
the driver API as given. It takes a bit more work to get something that
actually works for all this, and it borders on disingenuous to suggest
that the scheduler class/driver API as it now stands is capable of
having current mainline, nicksched, or SD ported to it without
significant impact on the core scheduler code.

So on both these points, I don't see CFS as it now stands as adequate
for a modular, hierarchical scheduler design. If we want a truly
modular and hierarchical scheduler design, I'd suggest pursuing it
directly and independently of policy, and furthermore treating the
representability of various policies in the scheduling class/driver API
as a test of its adequacy.


-- wli
