Message-Id: <1211439417.29104.7.camel@twins>
Date:	Thu, 22 May 2008 08:56:57 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Chris Friesen <cfriesen@...tel.com>
Cc:	linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
	mingo@...e.hu, pj@....com
Subject: Re: fair group scheduler not so fair?

On Wed, 2008-05-21 at 17:59 -0600, Chris Friesen wrote:
> I just downloaded the current git head and started playing with the fair 
> group scheduler.  (This is on a dual cpu Mac G5.)
> 
> I created two groups, "a" and "b".  Each of them was left with the 
> default share of 1024.
> 
> I created three cpu hogs by doing "cat /dev/zero > /dev/null".  One hog 
> (pid 2435) was put into group "a", while the other two were put into 
> group "b".
> 
> After giving them time to settle down, "top" showed the following:
> 
> 2438 cfriesen  20   0  3800  392  336 R 99.5  0.0   4:02.82 cat 
> 
> 2435 cfriesen  20   0  3800  392  336 R 65.9  0.0   3:30.94 cat 
> 
> 2437 cfriesen  20   0  3800  392  336 R 34.3  0.0   3:14.89 cat 
> 
> 
> 
> Where pid 2435 should have gotten a whole cpu's worth of time, it 
> actually only got 66% of a cpu.  Is this expected behaviour?
> 
> 
> 
> I then redid the test with two hogs in one group and three hogs in the 
> other group.  Unfortunately, the cpu shares were not equally distributed 
> within each group.  Using a 10-sec interval in "top", I got the following:
> 
> 
> 2522 cfriesen  20   0  3800  392  336 R 52.2  0.0   1:33.38 cat 
> 
> 2523 cfriesen  20   0  3800  392  336 R 48.9  0.0   1:37.85 cat 
> 
> 2524 cfriesen  20   0  3800  392  336 R 37.0  0.0   1:23.22 cat 
> 
> 2525 cfriesen  20   0  3800  392  336 R 32.6  0.0   1:22.62 cat 
> 
> 2559 cfriesen  20   0  3800  392  336 R 28.7  0.0   0:24.30 cat 
> 
> 
> Do we expect to see upwards of 9% relative unfairness between processes 
> within a class?
> 
> I tried messing with the tuneables in /proc/sys/kernel 
> (sched_latency_ns, sched_migration_cost, sched_min_granularity_ns) but 
> was unable to significantly improve these results.
> 
> Any pointers would be appreciated.
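
[For anyone wanting to reproduce this: the setup described above amounts
to roughly the following. The mount point and controller file names are
guesses for a 2008-era cgroup setup and may differ on your box.]

```shell
# Sketch of the reproduction, assuming the cpu controller can be
# mounted as a cgroup filesystem at /dev/cgroup (paths are a guess).
mount -t cgroup -o cpu none /dev/cgroup
mkdir /dev/cgroup/a /dev/cgroup/b

# Both groups are left at the default share of 1024.
cat /dev/cgroup/a/cpu.shares
cat /dev/cgroup/b/cpu.shares

# Start three cpu hogs; put the first in "a", the other two in "b".
for g in a b b; do
    cat /dev/zero > /dev/null &
    echo $! > /dev/cgroup/$g/tasks
done
```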

What you're testing is SMP fairness of group scheduling, and that code
is somewhat fresh (and has known issues - performance being nr 1 amongst
them), but it's quite possible it has some other issues as well.

Could you see if the patches found here:

 http://programming.kicks-ass.net/kernel-patches/sched-smp-group-fixes/

make any difference for you?
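
For reference, the idealized numbers being compared against can be
sketched as a toy model: each group gets cpu time proportional to its
share, and tasks within a group split that time evenly. (This is just
back-of-the-envelope arithmetic, not what the kernel code actually
computes; `expected_cpu_pct` is a made-up illustrative helper.)

```python
# Toy model of the idealized fair distribution: groups split the
# machine in proportion to their shares, tasks split their group's
# allocation evenly. Not the kernel's algorithm, just the target.

def expected_cpu_pct(ncpus, groups):
    """groups: {name: (share, ntasks)} -> {name: per-task % of one cpu}."""
    total_share = sum(share for share, _ in groups.values())
    return {
        name: 100.0 * ncpus * (share / total_share) / ntasks
        for name, (share, ntasks) in groups.items()
    }

# First test: one hog in "a", two in "b", equal shares, dual cpu.
# The lone hog in "a" should get a whole cpu (100%), b's hogs 50% each.
print(expected_cpu_pct(2, {"a": (1024, 1), "b": (1024, 2)}))

# Second test: two hogs vs. three hogs.
# Group "a" hogs should get 50% each, group "b" hogs ~33.3% each.
print(expected_cpu_pct(2, {"a": (1024, 2), "b": (1024, 3)}))
```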

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
