Message-ID: <1302519648.2388.19.camel@twins>
Date:	Mon, 11 Apr 2011 13:00:48 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	mingo@...hat.com, hpa@...or.com, linux-kernel@...r.kernel.org,
	kenchen@...gle.com, stable@...nel.org, tglx@...utronix.de,
	mingo@...e.hu
Cc:	linux-tip-commits@...r.kernel.org
Subject: Re: [tip:sched/urgent] sched: Fix sched-domain avg_load calculation

On Mon, 2011-04-11 at 10:46 +0000, tip-bot for Ken Chen wrote:
> Commit-ID:  b0432d8f162c7d5d9537b4cb749d44076b76a783
> Gitweb:     http://git.kernel.org/tip/b0432d8f162c7d5d9537b4cb749d44076b76a783
> Author:     Ken Chen <kenchen@...gle.com>
> AuthorDate: Thu, 7 Apr 2011 17:23:22 -0700
> Committer:  Ingo Molnar <mingo@...e.hu>
> CommitDate: Mon, 11 Apr 2011 11:08:54 +0200
> 
> sched: Fix sched-domain avg_load calculation
> 
> In function find_busiest_group(), the sched-domain avg_load isn't
> calculated at all if there is a group imbalance within the domain. This
> causes an erroneous imbalance calculation.
> 
> The reason is that calculate_imbalance() sees sds->avg_load = 0 and
> dumps the entire sds->max_load into the imbalance variable, which is
> later used to migrate the entire load from the busiest CPU to the
> puller CPU.
> 
> This has two really bad effects:
> 
> 1. a stampede of task migrations, and they won't be able to break out
>    of the bad state because of a positive feedback loop: large load
>    delta -> heavier load migration -> larger imbalance, and the cycle
>    goes on.
> 
> 2. severe imbalance in CPU queue depth.  This causes really long
>    scheduling latency blips, which badly affect applications that
>    have tight latency requirements.
> 
> The fix is to have the kernel calculate the domain avg_load in both
> cases. This ensures that the imbalance calculation is always sensible
> and that the target is usually halfway between the busiest and puller
> CPU.
> 
> Signed-off-by: Ken Chen <kenchen@...gle.com>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> Cc: <stable@...nel.org>
> Link: http://lkml.kernel.org/r/20110408002322.3A0D812217F@elm.corp.google.com
> Signed-off-by: Ingo Molnar <mingo@...e.hu> 

This was caused by 866ab43ef (sched: Fix the group_imb logic), which is
only in .39-rc.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
