Date:	Wed, 21 May 2008 17:59:22 -0600
From:	"Chris Friesen" <cfriesen@...tel.com>
To:	linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
	mingo@...e.hu, a.p.zijlstra@...llo.nl, pj@....com
Subject: fair group scheduler not so fair?

I just downloaded the current git head and started playing with the fair 
group scheduler.  (This is on a dual-cpu Mac G5.)

I created two groups, "a" and "b".  Each of them was left with the 
default share of 1024.
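
For reference, the setup looked roughly like this (this assumes the 
cgroup-based group scheduler; the /dev/cgroup mount point is just my 
choice, any empty directory works):

    # mount the cpu controller of the cgroup filesystem
    mkdir -p /dev/cgroup
    mount -t cgroup -o cpu none /dev/cgroup

    # create the two groups; each starts with cpu.shares = 1024
    mkdir /dev/cgroup/a /dev/cgroup/b
    cat /dev/cgroup/a/cpu.shares    # prints 1024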

I created three cpu hogs by running "cat /dev/zero > /dev/null".  One 
hog (pid 2435) was put into group "a", while the other two were put 
into group "b".

After giving them time to settle down, "top" showed the following:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2438 cfriesen  20   0  3800  392  336 R 99.5  0.0   4:02.82 cat
 2435 cfriesen  20   0  3800  392  336 R 65.9  0.0   3:30.94 cat
 2437 cfriesen  20   0  3800  392  336 R 34.3  0.0   3:14.89 cat



Where pid 2435 should have gotten a whole cpu's worth of time, it 
actually got only 66% of a cpu.  Is this expected behaviour?
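
To spell out the arithmetic behind my expectation (assuming a group's 
cpu entitlement is proportional to its shares and split evenly among 
its runnable tasks):

    per-group entitlement = 1024 / (1024 + 1024) * 2 cpus = 1 cpu

    group "a" (1 hog):  pid 2435            -> expect ~100%
    group "b" (2 hogs): pids 2437 and 2438  -> expect  ~50% each

    observed: 65.9% for group "a" versus 99.5% + 34.3% = 133.8% for
    group "b", i.e. roughly a 1:2 split between the groups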



I then redid the test with two hogs in one group and three hogs in the 
other.  Unfortunately, cpu time was not evenly distributed within each 
group.  Using a 10-second interval in "top", I got the following:


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2522 cfriesen  20   0  3800  392  336 R 52.2  0.0   1:33.38 cat
 2523 cfriesen  20   0  3800  392  336 R 48.9  0.0   1:37.85 cat
 2524 cfriesen  20   0  3800  392  336 R 37.0  0.0   1:23.22 cat
 2525 cfriesen  20   0  3800  392  336 R 32.6  0.0   1:22.62 cat
 2559 cfriesen  20   0  3800  392  336 R 28.7  0.0   0:24.30 cat


Do we expect to see upwards of 9% relative unfairness between processes 
within a group?
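
(By the same reasoning as above, each group should get one cpu.  
Inferring group membership from the observed percentages:

    group of 2 hogs: ~50%   each  (observed: 52.2, 48.9)
    group of 3 hogs: ~33.3% each  (observed: 37.0, 32.6, 28.7)

so the split between the groups looks right, but the split inside each 
group drifts.)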

I tried adjusting the tunables in /proc/sys/kernel 
(sched_latency_ns, sched_migration_cost, sched_min_granularity_ns) but 
was unable to significantly improve these results.
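
For completeness, this is the sort of adjustment I tried (the values 
here are purely illustrative):

    cat /proc/sys/kernel/sched_latency_ns            # current period
    cat /proc/sys/kernel/sched_min_granularity_ns

    # e.g. shrink the period and minimum granularity to force more
    # frequent preemption
    echo 10000000 > /proc/sys/kernel/sched_latency_ns
    echo 1000000  > /proc/sys/kernel/sched_min_granularity_ns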

Any pointers would be appreciated.

Thanks,

Chris
