Message-ID: <483DA5E7.5050600@nortel.com>
Date:	Wed, 28 May 2008 12:35:19 -0600
From:	"Chris Friesen" <cfriesen@...tel.com>
To:	vatsa@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	a.p.zijlstra@...llo.nl, pj@....com,
	Balbir Singh <balbir@...ibm.com>,
	aneesh.kumar@...ux.vnet.ibm.com, dhaval@...ux.vnet.ibm.com
Subject: Re: fair group scheduler not so fair?

Srivatsa Vaddagiri wrote:

> We seem to be skipping the last element in the task list always. In your
> case, the lone task in Group a/b is always skipped because of this.

> Updated patch (on top of 2.6.26-rc3 +
> http://programming.kicks-ass.net/kernel-patches/sched-smp-group-fixes/)
> below.  Pls let me know how it fares!

Looking much better, but still some fairness issues with more complex 
setups.
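
(For reference, a setup along these lines reproduces the scenarios
below -- plain cpu-controller cgroups with the shares left at their
defaults, so each group should be entitled to an equal slice of the
machine.  The mount point is just illustrative:

	mkdir /dev/cpuctl
	mount -t cgroup -o cpu none /dev/cpuctl
	mkdir /dev/cpuctl/A /dev/cpuctl/B
	echo 2477 > /dev/cpuctl/A/tasks
	echo 2478 > /dev/cpuctl/B/tasks
	echo 2479 > /dev/cpuctl/B/tasks

Moving a task from one group to another is just a matter of echoing
its pid into the destination group's "tasks" file.)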

pid 2477 in A, others in B
2477	99.5%
2478	49.9%
2479	49.9%

move 2478 to A
2479	99.9%
2477	49.9%
2478	49.9%

So far so good.  I then created C, and moved 2478 to it.  A 3-second 
"top" gave almost a 15% error from the desired behaviour for one group:

2479	76.2%
2477	72.2%
2478	51.0%
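
(The 99.5%/49.9% numbers above suggest this is a two-CPU box, so with
three equal-weight groups the desired behaviour is 200% / 3, i.e. about
66.7% for each group -- and, since each group holds a single task here,
about 66.7% per task.  The error figures for this scenario are the
deviations from that.)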


A 10-second average was better, but we still see errors of up to 6%:
2478	72.8%
2477	64.0%
2479	63.2%


I then set up a scenario with 3 tasks in A, 2 in B, and 1 in C.  A 
10-second "top" gave errors of up to 6.5%:
2500	60.1%
2491	37.5%
2492	37.4%
2489	25.0%
2488	19.9%
2490	19.9%
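
(Same arithmetic: ~66.7% per group, so the desired split is roughly
66.7% for the lone task in C, 33.3% each for the two tasks in B, and
22.2% each for the three tasks in A.)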

A retest gave errors of up to 8.1%:

2534	74.8%
2533	30.1%
2532	30.0%
2529	25.0%
2530	20.0%
2531	20.0%

Another retest gave perfect results initially:

2559	66.5%
2560	33.4%
2561	33.3%
2564	22.3%
2562	22.2%
2563	22.1%

but moving 2564 from group A to C and then back to A disturbed the 
perfect division of time and resulted in almost the same utilization 
pattern as above:

2559	74.9%
2560	30.0%
2561	29.6%
2564	25.3%
2562	20.0%
2563	20.0%

It looks like the perfectly balanced distribution is a metastable 
state: the system can sit there happily for some time, but any small 
disturbance may be enough to kick it over into a more stable but 
incorrect state.  Once we get into such an incorrect division of CPU 
time, it appears very difficult to return to perfect balancing.

Chris




