Message-ID: <Pine.LNX.4.64.0707251014520.14515@tongli.jf.intel.com>
Date: Wed, 25 Jul 2007 10:23:14 -0700 (PDT)
From: Tong Li <tong.n.li@...el.com>
To: Ingo Molnar <mingo@...e.hu>
cc: linux-kernel@...r.kernel.org, Chris Snook <csnook@...hat.com>
Subject: Re: [RFC] scheduler: improve SMP fairness in CFS
On Wed, 25 Jul 2007, Ingo Molnar wrote:
> they now nicely converge to the expected 80% long-term CPU usage.
>
> so, could you please try the patch below, does it work for you too?
>
Thanks for the patch. It doesn't work well on my 8-way box. Here's the
output at two different times; the per-task CPU shares also keep changing
from sample to sample.
[time 1]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
3717 tongli 20 0 7864 108 16 R 85 0.0 1:38.00 loop
3721 tongli 20 0 7864 108 16 R 85 0.0 1:37.51 loop
3723 tongli 20 0 7864 108 16 R 84 0.0 1:30.48 loop
3714 tongli 20 0 7864 108 16 R 83 0.0 1:33.25 loop
3720 tongli 20 0 7864 108 16 R 81 0.0 1:31.89 loop
3715 tongli 20 0 7864 108 16 R 78 0.0 1:36.38 loop
3719 tongli 20 0 7864 108 16 R 78 0.0 1:28.80 loop
3718 tongli 20 0 7864 108 16 R 78 0.0 1:33.84 loop
3722 tongli 20 0 7864 108 16 R 76 0.0 1:34.69 loop
3716 tongli 20 0 7864 108 16 R 70 0.0 1:33.15 loop
[time 2]
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
3721 tongli 20 0 7864 108 16 R 91 0.0 1:44.67 loop
3720 tongli 20 0 7864 108 16 R 89 0.0 1:40.55 loop
3722 tongli 20 0 7864 108 16 R 86 0.0 1:41.40 loop
3723 tongli 20 0 7864 108 16 R 82 0.0 1:38.94 loop
3715 tongli 20 0 7864 108 16 R 82 0.0 1:43.09 loop
3717 tongli 20 0 7864 108 16 R 79 0.0 1:46.38 loop
3718 tongli 20 0 7864 108 16 R 79 0.0 1:40.72 loop
3714 tongli 20 0 7864 108 16 R 77 0.0 1:40.17 loop
3719 tongli 20 0 7864 108 16 R 73 0.0 1:35.41 loop
3716 tongli 20 0 7864 108 16 R 61 0.0 1:38.66 loop
I'm running 2.6.23-rc1 with your patch on a two-socket quad-core system.
Top refresh rate is the default 3s.
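For reference, the workload is 10 copies of a CPU-bound "loop" task, as
shown in the top output above. A minimal sketch of such a test program is
below; the actual source isn't in this thread, so the busy-wait body and
the fork wrapper are only assumptions. With 10 identical runnable tasks on
8 CPUs, a fair scheduler should converge each task to 8/10 = 80% of a CPU
over the long term, which is the figure quoted above; the samples here
instead spread from roughly 61% to 91%.

/*
 * Sketch of a CPU-hog test load (assumed, not the original "loop" source).
 * Forks NTASKS children that each spin forever, so top shows NTASKS
 * runnable tasks competing for the available CPUs.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NTASKS 10	/* assumed: matches the 10 PIDs in the top output */

int main(void)
{
	int i;

	for (i = 0; i < NTASKS; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			exit(1);
		}
		if (pid == 0) {
			/* child: burn CPU forever */
			volatile unsigned long x = 0;
			for (;;)
				x++;
		}
	}

	/* parent: wait on the children so they can be sampled together */
	for (i = 0; i < NTASKS; i++)
		wait(NULL);
	return 0;
}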
tong