Message-ID: <20070509180205.GA27462@in.ibm.com>
Date:	Wed, 9 May 2007 23:32:05 +0530
From:	Srivatsa Vaddagiri <vatsa@...ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Mike Galbraith <efault@....de>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Con Kolivas <kernel@...ivas.org>,
	Nick Piggin <npiggin@...e.de>,
	Arjan van de Ven <arjan@...radead.org>,
	Peter Williams <pwil3058@...pond.net.au>,
	Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
	Willy Tarreau <w@....eu>,
	Gene Heskett <gene.heskett@...il.com>, Mark Lord <lkml@....ca>,
	tingy@...umass.edu, tong.n.li@...el.com
Subject: Definition of fairness (was Re: [patch] CFS scheduler, -v11)

On Tue, May 08, 2007 at 05:04:31PM +0200, Ingo Molnar wrote:
> thanks Mike - value 0x8 looks pretty good here and doesnt have the 
> artifacts you found. I've done a quick -v11 release with that fixed, 
> available at the usual place:
> 
>     http://people.redhat.com/mingo/cfs-scheduler/
> 
> with no other changes.

Ingo,
	I have a question regarding the definition of fairness used, especially
for tasks that are not 100% cpu hogs.

For example, consider two equally important tasks T1 and T2 running on the
same CPU, whose execution pattern is:

	T1 = 100% cpu hog
	T2 = 60% cpu hog (run for 600ms, sleep for 400ms)

Over an arbitrary observation period of 10 sec,

	T1 was ready to run for the full 10 sec
	T2 was ready to run for only 6 sec

Over this observation period, how much execution time should T2 get,
under a "fair" scheduler?

I was expecting both T2 and T1 to get 5 sec (50:50 split). Is this a
wrong expectation of fairness?
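
(For comparison, one alternative definition -- fairness only at each
instant, i.e. splitting the CPU 50:50 only while both tasks are actually
runnable -- would stretch each 600ms burst of T2 to ~1200ms of wall time;
one run+sleep cycle then takes ~1600ms, so T2 would get about
600/1600 = 37.5% of the CPU rather than 50%.)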

Anyway, the results of this experiment (using the attached testcase) are
below, as top(1) snapshots (columns: PID, USER, PR, NI, VIRT, RES, SHR, S,
%CPU, %MEM, TIME+, last-used CPU, command). T2 gets way below its fair
share IMO (under both cfs and sd).


2.6.21.1:

 5444 vatsa     16   0  2468  460  388 R   59  0.0   0:19.76 3 T1
 5443 vatsa     25   0  2468  460  388 R   40  0.0   0:15.36 3 T2


2.6.21.1 + cfs-v11:

 5460 vatsa     31   0  2464  460  388 R   70  0.0   0:15.28 3 T1
 5461 vatsa     29   0  2468  460  388 R   30  0.0   0:05.65 3 T2


2.6.21 + sd-0.48:

 5459 vatsa     23   0  2468  460  388 R   70  0.0   0:17.02 3 T1
 5460 vatsa     21   0  2464  460  388 R   30  0.0   0:06.21 3 T2


Note: 

T1 is started as ./cpuhog 600 0 10 > /dev/null &
T2 is started as ./cpuhog 600 400 10 > /dev/null &

First arg  = runtime in ms
Second arg = sleeptime in ms
Third arg  = observation period in seconds
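
For reference, here is a minimal sketch of what such a cpuhog.c could look
like (a hypothetical reconstruction, not the attached testcase itself; in
particular, measuring the run phase in wall time rather than consumed cpu
time is an assumption):

/*
 * cpuhog (sketch): burn the cpu for <runtime> ms, sleep for
 * <sleeptime> ms, and repeat until <period> seconds of wall time
 * have elapsed.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

/* Wall-clock time in milliseconds. */
static long long now_ms(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Busy-loop for roughly 'ms' milliseconds of wall time. */
static void burn_ms(long ms)
{
	long long start = now_ms();

	while (now_ms() - start < ms)
		; /* spin */
}

int main(int argc, char **argv)
{
	long runtime_ms, sleeptime_ms, period_sec;
	long long start;

	if (argc != 4) {
		fprintf(stderr,
			"usage: %s <runtime-ms> <sleeptime-ms> <period-sec>\n",
			argv[0]);
		return 1;
	}
	runtime_ms   = atol(argv[1]);
	sleeptime_ms = atol(argv[2]);
	period_sec   = atol(argv[3]);

	start = now_ms();
	while (now_ms() - start < period_sec * 1000LL) {
		burn_ms(runtime_ms);
		if (sleeptime_ms)
			usleep(sleeptime_ms * 1000);
	}
	printf("ran for %lld ms\n", now_ms() - start);
	return 0;
}

With sleeptime = 0 (as for T1 above) the task never sleeps and is a pure
cpu hog, while T2's 600/400 arguments give the 60% duty cycle described
above.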


-- 
Regards,
vatsa

View attachment "cpuhog.c" of type "text/plain" (1932 bytes)
