Message-ID: <46523081.6050007@bigpond.net.au>
Date: Tue, 22 May 2007 09:51:29 +1000
From: Peter Williams <pwil3058@...pond.net.au>
To: Dmitry Adamushko <dmitry.adamushko@...il.com>
CC: Ingo Molnar <mingo@...e.hu>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch] CFS scheduler, -v12
Dmitry Adamushko wrote:
> On 18/05/07, Peter Williams <pwil3058@...pond.net.au> wrote:
> [...]
>> One thing that might work is to jitter the load balancing interval a
>> bit. The reason I say this is that one of the characteristics of top
>> and gkrellm is that they run at a more or less constant interval (and,
>> in this case, X would also be following this pattern as it's doing
>> screen updates for top and gkrellm), and this means that it's possible
>> for the load balancing interval to synchronize with their intervals,
>> which in turn causes the observed problem.
>
> Hmm.. I guess a 0/4 scenario wouldn't fit well into this explanation..
No, and I haven't seen one.
> all 4 spinners "tend" to be on CPU0 (and, as I understand it, each
> gets ~25%?), so there must be plenty of moments for *idle_balance()*
> to be called on CPU1, as gkrellm, top and X together consume just a
> few % of CPU. Hence, we shouldn't be that dependent on the load
> balancing interval here..
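For reference, idle_balance() doesn't wait for the periodic interval:
it's kicked straight from schedule() whenever a CPU is about to go
idle. Roughly like this (from memory of mainline sched.c of this
vintage, not the -v12 code verbatim):

	/* in schedule(): try to pull work before switching to idle */
	if (unlikely(!rq->nr_running)) {
		idle_balance(cpu, rq);		/* newly-idle pull */
		if (!rq->nr_running)
			next = rq->idle;	/* nothing to steal */
	}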
The split that I see is 3/1, and neither CPU seems to be favoured with
respect to getting the majority. However, top, gkrellm and X always
seem to be on the CPU with the single spinner. The CPU% reported by top
is approx. 33%, 33%, 33% and 100% for the spinners.
If I renice the spinners to -10 (so that their load weights dominate
the run queue load calculations), the problem goes away: the spinner to
CPU allocation becomes 2/2 and top reports them all getting approx. 50%
each.
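To see why the renice makes the imbalance visible, here's some
back-of-the-envelope arithmetic (a sketch with hypothetical weights --
I haven't checked the exact -v12 weight table; the key point is that
per-task weight is static, independent of actual CPU usage):

	#include <stdio.h>

	#define W_NICE_MINUS_10	9548	/* hypothetical nice -10 weight */
	#define W_NICE_0	1024	/* nice 0 scale */

	int main(void)
	{
		/* 3/1 spinner split, everything at nice 0 */
		unsigned long cpu0 = 3 * W_NICE_0;		/* 3 spinners */
		unsigned long cpu1 = W_NICE_0 + 3 * W_NICE_0;	/* spinner + top, gkrellm, X */

		/* same split with the spinners reniced to -10 */
		unsigned long cpu0r = 3 * W_NICE_MINUS_10;
		unsigned long cpu1r = W_NICE_MINUS_10 + 3 * W_NICE_0;

		printf("nice 0:   %lu vs %lu\n", cpu0, cpu1);	/* 3072 vs 4096 */
		printf("nice -10: %lu vs %lu\n", cpu0r, cpu1r);	/* 28644 vs 12620 */
		return 0;
	}

With everything at nice 0 the 3/1 split actually looks balanced (or
even tilted the other way) in the weight sums, so the balancer has no
reason to move a spinner; with the spinners at -10 the imbalance is
glaring and only a 2/2 split is stable.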
It's also worth noting that I've had tests where the allocation started
out 2/2 and the system changed it to 3/1, where it stabilized. So it's
not just a case of bad luck with the initial CPU allocation when the
tasks start, with the load balancing then failing to fix it (which was
one of my earlier theories).
>
> (unlikely conspiracy theory)
It's not a conspiracy. It's just dumb luck. :-)
> - idle_balance() and load_balance() (the
> latter depends on the load balancing interval, which can be in
> sync with top/gkrellm activities as you suggest) always move either
> top or gkrellm between themselves.. esp. if X is reniced (so it gets
> additional "weight") and happens to be active (on CPU1) when
> load_balance() (kicked from scheduler_tick()) runs..
>
> p.s. these are mainly theoretical speculations.. I recently started
> looking at the load-balancing code (unfortunately, I don't have an SMP
> machine which I can upgrade to the recent kernel) and so far for me
> it's mainly about making sure I see things sanely.
I'm playing with some jitter experiments at the moment. The amount of
jitter needs to be small (a few tenths of a second), as any
synchronization would be happening at the seconds level: the intervals
for top and gkrellm will be in the 1 to 5 second range (I guess -- I
haven't checked) and the load balancing is every 60 seconds.
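The sort of thing I'm trying looks roughly like this (a sketch only,
not the actual patch; it assumes random32() from lib/random32.c and the
sched_domain field names from mainline sched.c):

	/*
	 * Perturb the balance interval by up to +/-200ms so successive
	 * balance ticks drift out of phase with any fixed-period
	 * workload (top/gkrellm/X screen updates).
	 */
	static unsigned long jittered_interval(struct sched_domain *sd)
	{
		unsigned long interval = sd->balance_interval;	/* ms */
		long jitter = (long)(random32() % 401) - 200;	/* [-200, +200] */

		if (jitter < 0 && interval <= (unsigned long)(-jitter))
			interval = 1;		/* clamp; never zero */
		else
			interval += jitter;

		return msecs_to_jiffies(interval);
	}

If the 3/1 split stops being sticky with the jitter applied, that would
be good evidence for the synchronization theory.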
Peter
--
Peter Williams pwil3058@...pond.net.au
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce