Message-ID: <20091221121619.GJ4489@kernel.dk>
Date: Mon, 21 Dec 2009 13:16:19 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Munehiro Ikeda <m-ikeda@...jp.nec.com>
Cc: Corrado Zoccolo <czoccolo@...il.com>,
Vivek Goyal <vgoyal@...hat.com>, linux-kernel@...r.kernel.org,
nauman@...gle.com, lizf@...fujitsu.com, ryov@...inux.co.jp,
fernando@....ntt.co.jp, taka@...inux.co.jp,
guijianfeng@...fujitsu.com, jmoyer@...hat.com, Alan.Brunelle@...com
Subject: Re: [RFC] CFQ group scheduling structure organization
On Thu, Dec 17 2009, Munehiro Ikeda wrote:
> Hello,
>
> Corrado Zoccolo wrote, on 12/17/2009 06:41 AM:
>> Hi,
>> On Wed, Dec 16, 2009 at 11:52 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
>>> Hi All,
>>>
>>> With some basic group scheduling support in CFQ, there are a few questions
>>> regarding what the group structure should look like in CFQ.
>>>
>>> Currently, grouping looks as follows. A and B are two cgroups created by
>>> the user.
>>>
>>> [snip]
>>>
>>> Proposal 4:
>>> ==========
>>> Treat tasks and groups at the same level. Currently groups are at the top
>>> level and tasks are at the second level. View the whole hierarchy as follows.
>>>
>>>
>>>              service-tree
>>>             /     |     \     \
>>>           T1     T2     G1     G2
>>>
>>> Here T1 and T2 are two tasks in the root group and G1 and G2 are two cgroups
>>> created under root.
>>>
>>> In this kind of scheme, any RT task in the root group will still be
>>> system-wide RT even if we create groups G1 and G2.
>>>
>>> So what are the issues?
>>>
>>> - I talked to a few folks and everybody found this scheme not so intuitive.
>>>   Their argument was that once I create a cgroup, say A, under root, then
>>>   bandwidth should be divided between "root" and "A" in proportion to
>>>   the weights.
>>>
>>>   It is not very intuitive that a group is competing with all the tasks
>>>   running in the root group. The disk share of a newly created group will
>>>   change if more tasks fork in the root group. So it is highly dynamic
>>>   rather than static, and hence unintuitive.
>
> I agree it might be dynamic, but I don't think it's unintuitive.
> I think it's reasonable that the disk share of a group is
> influenced by the number of tasks running in the root group,
> because the root group is shared by the tasks and the groups from
> the viewpoint of the cgroup interface, and they really do share disk bandwidth.
Agreed, this is my preferred solution as well. There are definitely valid
cases for doing both system-wide RT and system-wide idle, and there are
definitely valid reasons for doing that inside a single group as well.
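
As an illustration of the proportional split being discussed (this sketch is
not from the thread; it is not CFQ code, and the entity names and weights are
made up), the following small user-space C program shows the arithmetic of
proposal 4: with root-group tasks and cgroups as siblings on one service tree,
each entity receives weight / total_weight of the disk time, so a group's
share shrinks whenever another task forks in the root group.

/*
 * Minimal user-space sketch, not CFQ code: under proposal 4, root-group
 * tasks and cgroups sit on one service tree and each entity gets
 * weight / total_weight of the disk time.  Names and weights are
 * illustrative assumptions only.
 */
#include <stdio.h>

struct entity {
	const char *name;
	unsigned int weight;
};

static void print_shares(const struct entity *e, int nr)
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < nr; i++)
		total += e[i].weight;

	for (i = 0; i < nr; i++)
		printf("  %-3s -> %3u%% of disk time\n",
		       e[i].name, 100 * e[i].weight / total);
}

int main(void)
{
	/* two root-group tasks and two groups, all with equal weight */
	const struct entity before[] = {
		{ "T1", 100 }, { "T2", 100 }, { "G1", 100 }, { "G2", 100 },
	};
	/* a third task forks in the root group; every share shrinks */
	const struct entity after[] = {
		{ "T1", 100 }, { "T2", 100 }, { "T3", 100 },
		{ "G1", 100 }, { "G2", 100 },
	};

	printf("before fork:\n");
	print_shares(before, 4);
	printf("after a task forks in the root group:\n");
	print_shares(after, 5);
	return 0;
}

With equal weights, each of T1, T2, G1 and G2 gets 25% before the fork and
20% after T3 appears, which is exactly the dynamic behaviour described in the
quoted objection above.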
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/