Message-ID: <11041923.1212400091150.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 2 Jun 2008 18:48:11 +0900 (JST)
From: kamezawa.hiroyu@...fujitsu.com
To: balbir@...ux.vnet.ibm.com
Cc: kamezawa.hiroyu@...fujitsu.com, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>, xemul@...nvz.org,
menage@...gle.com, yamamoto@...inux.co.jp, lizf@...fujitsu.com
Subject: Re: Re: [RFC][PATCH 1/2] memcg: res_counter hierarchy
>Why don't we add soft limits, so that we don't have to go to the kernel and
>change limits frequently. One missing piece in the memory controller is that
>we don't shrink the memory controller when limits change or when tasks move.
>I think soft limits is a better solution.
>
My code adds shrinking_at_limit_change. I'm now trying to write
migrate_resources_at_task_move. (But it seems not so easy to implement
in a clean/fast way.)
I have no objection to soft limits if they are easy to implement. (My
explanation was just an example, and we could add more knobs.)
_But_ I think that something which controls multiple cgroups with
regard to hierarchy under some policy will never be simple. Adding
soft-limit knobs to each cgroup would be simple if there were no
hierarchy.
The memory controller's difference from the scheduler's hierarchy is
that we have to do multilevel page reclaim with feedback under some
policy (not only one..).
Even without hierarchy, we _did_ make the kernel's LRU logic more
complicated. But we can get help from the middleware here, I think.
My goal is never to make cgroups slow or complicated. If it's slow,
I'd like to say "ok, please use VMware. It's simpler and fast enough
for you."
"How fast it works compared to hardware virtualization" is the most
important thing for me. It should be much faster.
>Thanks for patiently explaining all of this.
>
Thanks, I'm sorry for my poor explanation skill.
Regards,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/