Message-ID: <1328122597.90242.YahooMailNeo@web113506.mail.gq1.yahoo.com>
Date: Wed, 1 Feb 2012 10:56:37 -0800 (PST)
From: Vinay Shankarkumar <vc376@...oo.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: "vc376@...oo.com" <vc376@...oo.com>
Subject: CFS scheduler behavior regards root group
Hello.
We are testing the CPU shares concept with cgroups enabled on the 2.6.32 kernel
and are observing the following:

When the load on the root group (the init task group) is increased, the performance
of the processes in the child cgroups decreases. Is this expected, or should the share
ratio of the groups (root and child) hold, so that child performance stays the same
as before the increase in load on the root group?

I think this behavior is in accordance with the comment below; is that correct?
Since there is no ->se associated with the init task group, is what we are observing valid?
/*
 * How much cpu bandwidth does init_task_group get?
 *
 * In case of task-groups formed thr' the cgroup filesystem, it
 * gets 100% of the cpu resources in the system. This overall
 * system cpu resource is divided among the tasks of
 * init_task_group and its child task-groups in a fair manner,
 * based on each entity's (task or task-group's) weight
 * (se->load.weight).
 *
 * In other words, if init_task_group has 10 tasks of weight
 * 1024) and two child groups A0 and A1 (of weight 1024 each),
 * then A0's share of the cpu resource is:
 *
 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
 *
 * We achieve this by letting init_task_group's tasks sit
 * directly in rq->cfs (i.e init_task_group->se[] = NULL).
 */
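To make sure we are reading that formula right, here is a small stand-alone sketch
(our own, not kernel code) that just re-runs the arithmetic from the comment for a
growing number of weight-1024 tasks in the root group, with two child groups A0 and
A1 of weight 1024 each:

#include <stdio.h>

/*
 * Illustration only (not kernel code): recompute "A0's bandwidth" from
 * the comment above as the number of weight-1024 tasks in the root
 * group grows, with two child groups A0 and A1 of weight 1024 each.
 */
int main(void)
{
	const double task_weight  = 1024.0;	/* default task weight */
	const double child_weight = 1024.0;	/* weight of A0 and of A1 */

	for (int root_tasks = 10; root_tasks <= 40; root_tasks += 10) {
		double total = root_tasks * task_weight + 2.0 * child_weight;
		printf("root tasks = %2d  ->  A0's bandwidth = %.2f%%\n",
		       root_tasks, 100.0 * child_weight / total);
	}
	return 0;
}

With 10 root tasks A0 gets about 8.33%, and with 40 it is down to about 2.38%,
which looks like the drop in child-group performance we are seeing.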
Is there a way to limit the share of the root cgroup so that the behavior we are
observing can be changed?
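The only alternative we could think of (untested; the mount point, group name and
pid below are just placeholders) is to move the busy tasks out of the root group
into a child cgroup of their own, so that they are weighted like any other group,
roughly along these lines:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * Untested sketch: create a child group, give it the same weight as the
 * other children, and move a task (placeholder pid 1234) that currently
 * sits in the root group into it.  /cgroup is assumed to be where the
 * cpu controller is mounted.
 */
int main(void)
{
	FILE *f;

	mkdir("/cgroup/rootwork", 0755);

	f = fopen("/cgroup/rootwork/cpu.shares", "w");
	if (f) {
		fprintf(f, "1024\n");	/* same weight as the other child groups */
		fclose(f);
	}

	f = fopen("/cgroup/rootwork/tasks", "w");
	if (f) {
		fprintf(f, "1234\n");	/* pid of a task currently in the root group */
		fclose(f);
	}
	return 0;
}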
Please copy me on the responses, as I have not yet subscribed to the mailing list.
Thanks in advance,
-Vinay.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/