[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <20081128181835.GA12948@balbir.in.ibm.com>
Date: Fri, 28 Nov 2008 23:48:35 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Pavel Emelyanov <xemul@...nvz.org>,
Li Zefan <lizf@...fujitsu.com>, Paul Menage <menage@...gle.com>
Subject: Re: [RFC][PATCH -mmotm 0/2] misc patches for memory cgroup
hierarchy
* Daisuke Nishimura <nishimura@....nes.nec.co.jp> [2008-11-28 18:02:52]:
> Hi.
>
> I'm writing some patches for memory cgroup hierarchy.
>
> I think KAMEZAWA-san's cgroup-id patches are the most important pathes now,
> but I post these patches as RFC before going further.
>
> Patch descriptions:
> - [1/2] take account of memsw
> mem_cgroup_hierarchical_reclaim checks only mem->res now.
> It should also check mem->memsw when do_swap_account.
> - [2/2] avoid oom
> In previous implementation, mem_cgroup_try_charge checked the return
> value of mem_cgroup_try_to_free_pages, and just retried if some pages
> had been reclaimed.
> But now, try_charge(and mem_cgroup_hierarchical_reclaim called from it)
> only checks whether the usage is less than the limit.
> I see oom easily in some tests which didn't cause oom before.
>
> Both patches are for memory-cgroup-hierarchical-reclaim-v4 patch series.
>
> My current plan for memory cgroup hierarchy:
> - If hierarchy is enabled, limit of child should not exceed that of parent.
> - Change other calls for mem_cgroup_try_to_free_page() to
> mem_cgroup_hierarchical_reclaim() if possible.
>
Thanks, Daisuke,
I am in a conference and taken a quick look. The patches seem sane,
but I've not reviewed them carefully. I'll revert back next week
--
Balbir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists