Date:	Mon, 16 Apr 2007 17:38:52 +0300
From:	Al Boldi <a1426z@...ab.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	William Lee Irwin III <wli@...omorphy.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]

Ingo Molnar wrote:
> * William Lee Irwin III <wli@...omorphy.com> wrote:
> > On Sun, Apr 15, 2007 at 09:57:48PM +0200, Ingo Molnar wrote:
> > > Oh I was very much testing "CPU bandwidth allocation as influenced by
> > > nice numbers" - it's one of the basic things I do when modifying the
> > > scheduler. An automated tool, while nice (all automation is nice)
> > > wouldn't necessarily show such bugs though, because here too it needed
> > > thousands of running tasks to trigger in practice. Any volunteers? ;)
> >
> > Worst comes to worst, I might actually get around to doing it myself.
> > Any more detailed descriptions of the test for a rainy day?
>
> The main complication here is that the handling of nice levels is still
> typically a 2nd or 3rd degree design factor when writing schedulers. The
> reason isn't carelessness, the reason is simply that users typically only
> care about a single nice level: the one that all tasks run under by
> default.
>
> Also, often there's just one or two good ways to attack the problem
> within a given scheduler approach and the quality of nice levels often
> suffers under other, more important design factors like performance.
>
> This means that, for example, for the vanilla scheduler the distribution
> of CPU power depends on HZ, on the number of tasks and on the scheduling
> pattern. The distribution of CPU power amongst nice levels is basically
> a function of _everything_. That makes any automated test pretty
> challenging. Both with SD and with CFS there's a good chance to actually
> formalize the meaning of nice levels, but I'd not go as far as to
> mandate any particular behavior by rigidly saying "pass this automated
> tool, else ...", other than "make nice levels reasonable". All the other
> more formal CPU resource limitation techniques are then a matter of
> CKRM-alike patches, which offer much more fine-grained mechanisms than
> pure nice levels anyway.
>
> So, to answer your question: it's pretty much freely defined. Make up
> your mind about it and figure out the ways people use nice levels
> and think about which aspects of that experience are worth testing for
> intelligently.
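
Just to make the "formalize" part concrete, one possible rule (purely
illustrative on my part, not something I'm claiming any of the schedulers
discussed here implements): give a task at nice n a weight of r^-n for
some fixed ratio r, and define its expected CPU share as its weight
divided by the sum of the weights of all runnable tasks.  With r = 1.25
and three busy loops at nice 0, 5 and 10, the weights come out to about
1.00, 0.33 and 0.11, i.e. expected shares of roughly 70%, 23% and 7%, and
each step of 5 nice levels costs the same factor of ~3 no matter how many
tasks are running.  An automated test would then have something exact to
compare against.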

I think this post, http://lkml.org/lkml/2007/4/15/5, shows the problem 
clearly with only 3 tasks running.
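
In case it helps whoever picks this up, below is a rough sketch of such a
test (untested; the nice levels, the 30-second runtime, the file name and
the pinning to CPU 0 are arbitrary choices of mine, and it measures the
children via plain wait4() rusage rather than anything scheduler-specific).
It forks one busy loop per nice level, makes them all compete for a single
CPU, and prints the share of CPU time each one actually received, which
can then be compared against whatever formal rule one settles on:

/*
 * nicebw.c - rough sketch of a nice-level CPU bandwidth test.
 *
 * Forks one busy-looping child per entry in nice_levels[], pins them
 * all to CPU 0 so they compete for one CPU, lets them run for RUNTIME
 * seconds, then reports how much CPU time each child actually got.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static const int nice_levels[] = { 0, 5, 10 };
#define NTASKS  (sizeof(nice_levels) / sizeof(nice_levels[0]))
#define RUNTIME 30	/* seconds of competition */

int main(void)
{
	pid_t pids[NTASKS];
	double cpu[NTASKS], total = 0.0;
	cpu_set_t one_cpu;
	unsigned int i;

	/* Pin ourselves (and thus the children we fork) to CPU 0 so
	 * every task competes for the same CPU. */
	CPU_ZERO(&one_cpu);
	CPU_SET(0, &one_cpu);
	if (sched_setaffinity(0, sizeof(one_cpu), &one_cpu) < 0)
		perror("sched_setaffinity");

	for (i = 0; i < NTASKS; i++) {
		pids[i] = fork();
		if (pids[i] < 0) {
			perror("fork");
			exit(1);
		}
		if (pids[i] == 0) {
			/* Child: renice ourselves, then just burn CPU. */
			errno = 0;
			if (nice(nice_levels[i]) == -1 && errno)
				perror("nice");
			for (;;)
				;
		}
	}

	sleep(RUNTIME);

	/* Kill the children one by one and collect the CPU time each of
	 * them managed to consume.  (The later ones get a tiny bit of
	 * extra runtime while the earlier ones are being reaped; good
	 * enough for a rough comparison.) */
	for (i = 0; i < NTASKS; i++) {
		struct rusage ru;

		kill(pids[i], SIGKILL);
		wait4(pids[i], NULL, 0, &ru);
		cpu[i] = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
			 ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
		total += cpu[i];
	}

	for (i = 0; i < NTASKS; i++)
		printf("nice %2d: %6.2fs CPU  (%5.1f%%)\n",
		       nice_levels[i], cpu[i],
		       total > 0.0 ? 100.0 * cpu[i] / total : 0.0);

	return 0;
}

Build with something like "gcc -O2 -o nicebw nicebw.c" and run it on an
otherwise idle box, since any background load on CPU 0 will skew the
measured shares.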


Thanks!

--
Al

