Message-ID: <b040c32a0907231517l265a9528w628d48fa3625e261@mail.gmail.com>
Date:	Thu, 23 Jul 2009 15:17:18 -0700
From:	Ken Chen <kenchen@...gle.com>
To:	bharata@...ux.vnet.ibm.com
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: CFS group scheduler fairness broken starting from 2.6.29-rc1

On Thu, Jul 23, 2009 at 12:57 AM, Bharata B
Rao<bharata@...ux.vnet.ibm.com> wrote:
> Hi,
>
> Group scheduler fairness is broken since 2.6.29-rc1. git bisect led me
> to this commit:
>
> commit ec4e0e2fe018992d980910db901637c814575914
> Author: Ken Chen <kenchen@...gle.com>
> Date:   Tue Nov 18 22:41:57 2008 -0800
>
>    sched: fix inconsistency when redistribute per-cpu tg->cfs_rq shares
>
>    Impact: make load-balancing more consistent
> ....
>
> ======================================================================
>                        % CPU time division between groups
> Group           2.6.29-rc1              2.6.29-rc1 w/o the above patch
> ======================================================================
> a with 8 tasks  44                      31
> b with 5 tasks  32                      34
> c with 3 tasks  22                      34
> ======================================================================
> All groups had equal shares.

What value did you use for each task_group's shares?  For a very large
value of tg->shares, it could be that all of the boost went to one CPU,
subsequently causing the load balancer to shuffle tasks around.  Do you
see any unexpected task migrations?
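To illustrate the effect: the commit redistributes a group's shares across
CPUs in proportion to each CPU's cfs_rq load.  A simplified model (not the
kernel code; distribute_shares, per_cpu_load, and MIN_SHARES are illustrative
names, and the real implementation works on weighted load averages) shows how
a large tg->shares value can concentrate almost all of the weight on one CPU:

```python
MIN_SHARES = 2  # illustrative floor, analogous to the kernel's minimum

def distribute_shares(tg_shares, per_cpu_load):
    """Split tg_shares across CPUs in proportion to per-CPU load."""
    total = sum(per_cpu_load)
    if total == 0:
        # No load anywhere: split evenly.
        return [tg_shares // len(per_cpu_load)] * len(per_cpu_load)
    return [max(MIN_SHARES, tg_shares * load // total)
            for load in per_cpu_load]

# One busy CPU, two nearly idle ones, with a very large tg->shares:
shares = distribute_shares(100000, [3000, 10, 10])
print(shares)
# Almost all of the group's weight lands on CPU 0, so the load
# balancer sees a large imbalance and may migrate tasks.
```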

- Ken
