Date:	Tue, 17 Apr 2007 09:07:49 -0400
From:	James Bruce <bruce@...rew.cmu.edu>
To:	linux-kernel@...r.kernel.org
Cc:	ck@....kolivas.org
Subject:  Re: [Announce] [patch] Modular Scheduler Core and Completely Fair
   Scheduler [CFS]

Chris Friesen wrote:
> William Lee Irwin III wrote:
> 
>> The sorts of explicit decisions I'd like to see made for these are:
>> (1) In a mixture of tasks with varying nice numbers, a given nice number
>>     corresponds to some share of CPU bandwidth. Implementations
>>     should not have the freedom to change this arbitrarily to suit
>>     some internal goal of the implementation.
> 
> The first question that comes to my mind is whether nice levels should 
> be linear or not.  I would lean towards nonlinear as it allows a wider 
> range (although of course at the expense of precision).  Maybe something 
> like "each nice level gives X times the cpu of the previous"?  I think a 
> value of X somewhere between 1.15 and 1.25 might be reasonable.

Nonlinear is a must IMO.  I would suggest X = exp(ln(10)/10) ~= 1.2589

That value has the property that a nice=10 task gets 1/10th the cpu of a 
nice=0 task, and a nice=20 task gets 1/100th.  I think that would be 
fairly easy to explain to admins and users, so they know what to expect 
from nicing tasks.
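
For anyone who wants to eyeball the resulting shares, here's a 
throwaway userspace snippet (not kernel code, just a sanity check of 
the proposed X) that prints each nice level's CPU weight relative to 
nice=0:

/*
 * Sanity check of the proposed scaling, X = exp(ln(10)/10): prints the
 * CPU weight of each nice level relative to nice=0, so the "1/10th per
 * 10 nice levels" property is easy to verify.  Not kernel code; build
 * with e.g. gcc weights.c -lm.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double X = exp(log(10.0) / 10.0);	/* ~= 1.2589 */
	int nice;

	for (nice = -20; nice <= 20; nice += 5) {
		/* weight relative to a nice=0 task */
		double weight = pow(X, -nice);

		printf("nice %3d -> relative weight %8.4f\n", nice, weight);
	}
	return 0;
}

nice=10 comes out at exactly 0.1000 and nice=20 at 0.0100, which is the 
property described above.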

> What about also having something that looks at latency, and how latency 
> changes with niceness?

I think this would be a lot harder to pin down, since it's a function of 
all the other tasks running and their nice levels.  Do you have any of 
the RT-derived analysis models in mind?

> What about specifying the timeframe over which the cpu bandwidth is 
> measured?  I currently have a system where the application designers 
> would like it to be totally fair over a period of 1 second.  As you can 
> imagine, mainline doesn't do very well in this case.

It might be easier to specify the maximum deviation from the ideal 
bandwidth over a certain period, i.e. something like "over a period of 
one second, each task receives within 10% of its expected bandwidth".
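
As a rough illustration of what checking such a criterion could look 
like (the task names, weights and measured runtimes below are made up 
for the example, not taken from any real scheduler interface):

/*
 * Rough illustration of the "within 10% of expected bandwidth per
 * second" criterion.  Given per-task weights and the CPU time each task
 * actually received in a 1-second window, flag tasks whose share
 * deviates from the weight-proportional ideal by more than 10%.
 */
#include <stdio.h>

struct sample {
	const char *name;
	double weight;	/* relative weight, e.g. derived from nice level */
	double ran_ms;	/* CPU time received in the measurement window */
};

int main(void)
{
	struct sample tasks[] = {
		{ "task-a", 1.00, 480.0 },
		{ "task-b", 1.00, 430.0 },
		{ "task-c", 0.10,  90.0 },
	};
	const double window_ms = 1000.0;	/* 1-second period */
	const double tolerance = 0.10;		/* +/- 10% of expected */
	double total_weight = 0.0;
	size_t i;

	for (i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
		total_weight += tasks[i].weight;

	for (i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++) {
		double expected = window_ms * tasks[i].weight / total_weight;
		double dev = (tasks[i].ran_ms - expected) / expected;

		printf("%s: expected %.1f ms, got %.1f ms (%+.1f%%)%s\n",
		       tasks[i].name, expected, tasks[i].ran_ms,
		       dev * 100.0,
		       (dev < -tolerance || dev > tolerance)
				? "  <-- outside bound" : "");
	}
	return 0;
}

With those numbers, task-a and task-b land within the bound while 
task-c gets far more than its weight-proportional share and would be 
flagged; the point is just that "fair over 1 second" becomes a 
measurable statement once the tolerance and window are written down.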

  - Jim Bruce
