Message-ID: <20090724043001.GC5304@in.ibm.com>
Date:	Fri, 24 Jul 2009 10:00:01 +0530
From:	Bharata B Rao <bharata@...ux.vnet.ibm.com>
To:	Ken Chen <kenchen@...gle.com>
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: CFS group scheduler fairness broken starting from 2.6.29-rc1

On Thu, Jul 23, 2009 at 03:17:18PM -0700, Ken Chen wrote:
> On Thu, Jul 23, 2009 at 12:57 AM, Bharata B
> Rao <bharata@...ux.vnet.ibm.com> wrote:
> > Hi,
> >
> > Group scheduler fairness is broken since 2.6.29-rc1. git bisect led me
> > to this commit:
> >
> > commit ec4e0e2fe018992d980910db901637c814575914
> > Author: Ken Chen <kenchen@...gle.com>
> > Date:   Tue Nov 18 22:41:57 2008 -0800
> >
> >    sched: fix inconsistency when redistribute per-cpu tg->cfs_rq shares
> >
> >    Impact: make load-balancing more consistent
> > ....
> >
> > ======================================================================
> >                      % CPU time division between groups
> > Group           2.6.29-rc1              2.6.29-rc1 w/o the above patch
> > ======================================================================
> > a with 8 tasks  44                      31
> > b with 5 tasks  32                      34
> > c with 3 tasks  22                      34
> > ======================================================================
> > All groups had equal shares.
> 
> What value did you use for each task_group's share?  For very large
> values of tg->shares, it could be that all of the boost went to one
> CPU, subsequently causing the load-balancer to shuffle tasks around.
> Do you see any unexpected task migrations?

I used the default of 1024 for each group.
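
FWIW, the setup is roughly like the sketch below: three cpu cgroups with
the default cpu.shares of 1024 and 8/5/3 CPU-bound tasks attached to
them. The mount point, group names and busy loop are only placeholders
here, not my actual test harness:

/* Rough reproduction sketch, not the actual test script.
 * Assumes the cpu controller is mounted at CGROOT and that groups
 * a, b and c already exist with the default cpu.shares of 1024. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define CGROOT "/sys/fs/cgroup/cpu"	/* adjust to your cgroup mount */

static void spawn_hogs(const char *group, int ntasks)
{
	char path[128];

	snprintf(path, sizeof(path), CGROOT "/%s/tasks", group);

	for (int i = 0; i < ntasks; i++) {
		if (fork() == 0) {
			FILE *f = fopen(path, "w");	/* attach child to its group */
			if (f) {
				fprintf(f, "%d\n", (int)getpid());
				fclose(f);
			}
			volatile unsigned long spin = 0;
			for (;;)			/* CPU-bound loop */
				spin++;
		}
	}
}

int main(void)
{
	spawn_hogs("a", 8);
	spawn_hogs("b", 5);
	spawn_hogs("c", 3);

	sleep(60);		/* watch the per-group CPU split meanwhile */
	kill(0, SIGKILL);	/* kill the whole process group, incl. self */
	return 0;
}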

Without your patch, each of the tasks sees around 165 migrations during
a 60s run, but with your patch, they see 125 migrations (as per
se.nr_migrations). I am using an 8-CPU machine here.
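
The migration counts above come from /proc/<pid>/sched; something along
the lines of the snippet below can be used to sample that counter (it
needs CONFIG_SCHED_DEBUG so the file exists; the helper is only
illustrative):

/* Minimal sketch: read se.nr_migrations for a given pid from
 * /proc/<pid>/sched.  Field name as printed by 2.6.29-ish kernels. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long nr_migrations(int pid)
{
	char path[64], line[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/sched", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "se.nr_migrations")) {
			char *colon = strchr(line, ':');
			if (colon)
				val = strtol(colon + 1, NULL, 10);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	if (argc > 1)
		printf("se.nr_migrations = %ld\n", nr_migrations(atoi(argv[1])));
	return 0;
}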

Regards,
Bharata.
