Message-ID: <484451F7.5090001@nortel.com>
Date: Mon, 02 Jun 2008 14:03:03 -0600
From: "Chris Friesen" <cfriesen@...tel.com>
To: vatsa@...ux.vnet.ibm.com
CC: linux-kernel@...r.kernel.org, mingo@...e.hu,
a.p.zijlstra@...llo.nl, pj@....com,
Balbir Singh <balbir@...ibm.com>,
aneesh.kumar@...ux.vnet.ibm.com, dhaval@...ux.vnet.ibm.com
Subject: Re: fair group scheduler not so fair?

Srivatsa Vaddagiri wrote:
> That seems to be pretty difficult to achieve with the per-cpu runqueue
> and smpnice based load balancing approach we have now.

Okay, thanks.

>>Initially I tried a simple setup with three hogs all in the default "sys"
>>group. Over multiple retries using 10-sec intervals, sometimes it gave
>>roughly 67% for each task, other times it settled into a 100/50/50 split
>>that remained stable over time.
> Was this with imbalance_pct set to 105? Does it make any difference if
> you change imbalance_pct to say 102?

It was set to 105 initially. I later reproduced the problem with 102.
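
I set it on every domain with something like this (a rough sketch;
assumes CONFIG_SCHED_DEBUG so the sched_domain tunables show up under
/proc):

    for f in /proc/sys/kernel/sched_domain/cpu*/domain*/imbalance_pct; do
        echo 102 > $f    # 105 for the earlier runs
    done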

For example, the following run was with imbalance_pct=102, with three
tasks created in the sys class. Going by the accumulated runtimes, pid
2499 has had a cpu all to itself for over a minute:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2499 cfriesen  20   0  3800  392  336 R 99.8  0.0   1:05.85 cat
 2496 cfriesen  20   0  3800  392  336 R 50.0  0.0   0:32.95 cat
 2498 cfriesen  20   0  3800  392  336 R 50.0  0.0   0:32.97 cat

The next run was much better; after a minute the accumulated runtimes
were all within a second of each other:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2505 cfriesen  20   0  3800  392  336 R 68.2  0.0   1:00.32 cat
 2506 cfriesen  20   0  3800  392  336 R 66.9  0.0   0:59.85 cat
 2503 cfriesen  20   0  3800  392  336 R 64.2  0.0   1:00.21 cat

The lack of predictability is disturbing, as it implies some sensitivity
to the specific test conditions.
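
For completeness, the hogs and the group are created along these lines
(a rough sketch, assuming the cgroup cpu controller; the /cgroups mount
point and the share value are arbitrary):

    mount -t cgroup -o cpu none /cgroups    # group scheduler interface
    mkdir /cgroups/sys
    echo 1024 > /cgroups/sys/cpu.shares     # group weight
    for i in 1 2 3; do
        cat /dev/zero > /dev/null &         # cpu hog ("cat" in top)
        echo $! > /cgroups/sys/tasks        # move the hog into the group
    done

The three-group case below is the same idea, with grp1-grp3 and one hog
in each.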
>>With three groups, one task in each, I tried both 10 and 60 second
>>intervals. The longer interval looked better but was still up to 0.8% off:
>
>
> I honestly don't know if we can do better than 0.8%! In any case, I'd
> expect that it would require more drastic changes.

No problem. It's still far superior to the SMP performance of CKRM,
which is what we're currently using (albeit heavily modified).
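
For what it's worth, the "0.8% off" numbers come from sampling the
hogs' accumulated cpu time at the interval boundaries, along these
lines (a rough sketch; P1-P3 stand for the hog pids):

    sample() {
        for p in "$@"; do
            awk '{print $14 + $15}' /proc/$p/stat   # utime+stime, in ticks
        done
    }
    before=$(sample $P1 $P2 $P3)
    sleep 60
    after=$(sample $P1 $P2 $P3)
    echo "$before"
    echo "$after"    # subtract per task and compare to an equal 1/3 split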

Chris