Message-ID: <465275EF.8060905@bigpond.net.au>
Date:	Tue, 22 May 2007 14:47:43 +1000
From:	Peter Williams <pwil3058@...pond.net.au>
To:	Dmitry Adamushko <dmitry.adamushko@...il.com>
CC:	Ingo Molnar <mingo@...e.hu>,
	Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch] CFS scheduler, -v12

Peter Williams wrote:
> Dmitry Adamushko wrote:
>> On 18/05/07, Peter Williams <pwil3058@...pond.net.au> wrote:
>> [...]
>>> One thing that might work is to jitter the load balancing interval a
>>> bit.  The reason I say this is that one of the characteristics of top
>>> and gkrellm is that they run at a more or less constant interval (and,
>>> in this case, X would also be following this pattern as it's doing
>>> screen updates for top and gkrellm) and this means that it's possible
>>> for the load balancing interval to synchronize with their intervals
>>> which in turn causes the observed problem.
>>
>> Hmm... I guess a 0/4 scenario wouldn't fit well into this explanation...
> 
> No, and I haven't seen one.
> 
>> all 4 spinners "tend" to be on CPU0 (and, as I understand it, each
>> gets approx. 25%?), so there must be plenty of moments for
>> *idle_balance()* to be called on CPU1, as gkrellm, top and X together
>> consume just a few % of CPU.  Hence, we should not be that dependent
>> on the load balancing interval here...
> 
> The split that I see is 3/1 and neither CPU seems to be favoured with 
> respect to getting the majority.  However, top, gkrellm and X always 
> seem to be on the CPU with the single spinner.  The CPU% reported by 
> top is approx. 33%, 33%, 33% and 100% for the spinners.
> 
> If I renice the spinners to -10 (so that their load weights dominate 
> the run queue load calculations) the problem goes away: the spinner to 
> CPU allocation is 2/2 and top reports them all getting approx. 50% 
> each.

For no good reason other than curiosity, I tried a variation of the 
renicing experiment above in which I reniced the spinners to 10 instead 
of -10 and, to my surprise, they were allocated 2/2 to the CPUs on 
average.  I say "on average" because the allocations were a little more 
volatile: occasionally a 0/4 split would occur, but it would last for 
less than one top cycle before the 2/2 was re-established.  The 
quickness of these recoveries indicates that it was most likely the 
idle balance mechanism that restored the balance.
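
That quickness makes sense given how idle balancing is triggered.  From 
memory of the scheduler code of this era (simplified, so treat it as a 
sketch rather than the exact source):

	/* in schedule(): if this runqueue is about to go idle, try
	 * to pull work over immediately instead of waiting for the
	 * next rebalance tick */
	if (unlikely(!rq->nr_running))
		idle_balance(cpu, rq);

i.e. it reacts as soon as a CPU runs dry, which is well inside one top 
cycle.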

This may point the finger at the tick based load balancing mechanism 
being too conservative when it decides whether tasks need to be moved.  
In the case where the spinners are at nice == 0, the idle balance 
mechanism never comes into play (the 0/4 split is never seen), so the 
tick based mechanism is the only one in force, and that is where the 
anomalies are seen.
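
To illustrate the conservatism in question: the tick based path only 
moves tasks when the busiest queue exceeds the local one by a healthy 
margin.  Simplified from memory of the find_busiest_group() logic 
(where imbalance_pct is typically 125, i.e. a 25% margin), so again 
just a sketch:

	/* if the busiest queue isn't at least imbalance_pct/100
	 * times as loaded as this one, declare things balanced and
	 * move nothing */
	if (100 * max_load <= sd->imbalance_pct * this_load)
		goto out_balanced;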

Only the tick based mechanism is in force in the nice == -10 case as 
well, but there the spinners' high load weights overcome that 
mechanism's conservatism: e.g. the difference in queue loads for a 1/3 
split in this case is equivalent to the difference that would be 
generated by an imbalance of about 18 nice == 0 spinners, i.e. too big 
to be ignored.
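
To spell that arithmetic out, using the load weight values that give 
these numbers (nice 0 -> 1024 and nice -10 -> 9548, as in the 
scheduler's prio_to_weight[] table, if I have them right):

	1 spinner  * 9548 =  9548	(lighter CPU)
	3 spinners * 9548 = 28644	(heavier CPU)

	(28644 - 9548) / 1024 ~= 18.6 nice == 0 tasks' worth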

The evidence seems to indicate that IF a rebalance operation gets 
initiated, then the right amount of load will get moved.

This new evidence weakens (but does not totally destroy) my 
synchronization (a.k.a. conspiracy) theory.
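
Should the synchronization theory turn out to matter after all, the 
interval jitter idea mentioned above could look roughly like the sketch 
below.  This is only an illustration against the scheduler names of 
this era (sched_domain, balance_interval); the helper and the jitter 
source are mine, not from any posted patch:

	/*
	 * Perturb a sched domain's balance interval by roughly
	 * +/-12.5% so that it can't stay phase-locked with
	 * fixed-period tasks such as top, gkrellm and the X screen
	 * updates they drive.
	 */
	static unsigned long jittered_interval(struct sched_domain *sd)
	{
		unsigned long interval = sd->balance_interval;
		unsigned long jitter;

		/* cheap pseudo-randomness from the scheduler clock */
		jitter = (unsigned long)sched_clock() % (interval / 4 + 1);

		/* result lies in [7/8 * interval, 9/8 * interval] */
		return interval - interval / 8 + jitter;
	}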

Peter
PS As the total load weight of 4 nice == 10 tasks is only about 40% of 
the load weight of a single nice == 0 task, the occasional 0/4 split in 
the nice == 10 case is not unexpected: it would be the desirable 
allocation if there were exactly one other running task at nice == 0.
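
For the record, that 40% figure follows from the same weight table 
(nice 10 -> 110):

	4 * 110 = 440
	440 / 1024 ~= 0.43

i.e. all four nice == 10 spinners together weigh less than half of one 
nice == 0 task, which is well inside the balancer's tolerance.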
-- 
Peter Williams                                   pwil3058@...pond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce