Message-Id: <20081204183428.19cbd22d.nishimura@mxp.nes.nec.co.jp>
Date: Thu, 4 Dec 2008 18:34:28 +0900
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"lizf@...fujitsu.com" <lizf@...fujitsu.com>,
Paul Menage <menage@...gle.com>, nishimura@....nes.nec.co.jp
Subject: Re: [Experimental][PATCH 19/21] memcg-fix-pre-destroy.patch
Added CC: Paul Menage <menage@...gle.com>
> @@ -2096,7 +2112,7 @@ static void mem_cgroup_get(struct mem_cg
> static void mem_cgroup_put(struct mem_cgroup *mem)
> {
> if (atomic_dec_and_test(&mem->refcnt)) {
> - if (!mem->obsolete)
> + if (!css_under_removal(&mem->css))
> return;
> mem_cgroup_free(mem);
> }
I don't think it's safe to check css_under_removal here without cgroup_lock.
(It's safe *NOW* only because memcg is the only user of css->refcnt.)
As Li said before, css_under_removal doesn't necessarily mean
this group has been destroyed, but the mem_cgroup would be freed anyway.
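
To illustrate, this is roughly the interleaving I'm worried about
(just a hand-written sketch, assuming rmdir can still fail and be
rolled back after the removal state becomes visible):

	CPU0 (rmdir path, cgroup_lock held)	CPU1 (mem_cgroup_put)
	-----------------------------------	-----------------------------
	mark css as under removal
						atomic_dec_and_test() -> true
						css_under_removal() -> true
	pre_destroy fails, rmdir is
	aborted, the group stays alive
						mem_cgroup_free(mem)
						/* freed under CPU0's feet */
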
But adding cgroup_lock/unlock here causes another deadlock,
because mem_cgroup_get_next_node calls mem_cgroup_put.
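
To be concrete, the naive fix I mean would look something like this
(a sketch only, not against the actual tree):

	static void mem_cgroup_put(struct mem_cgroup *mem)
	{
		if (atomic_dec_and_test(&mem->refcnt)) {
			/* serialize against cgroup removal/rollback */
			cgroup_lock();
			if (!css_under_removal(&mem->css)) {
				cgroup_unlock();
				return;
			}
			cgroup_unlock();
			mem_cgroup_free(mem);
		}
	}

but if I read the series right, mem_cgroup_get_next_node() calls
mem_cgroup_put() while cgroup_lock is already held, so this would
deadlock on cgroup_lock.
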
Hmm, the hierarchical reclaim code will be rewritten completely by [21/21],
so would it be better to change the patch order or to take another approach?
Thanks,
Daisuke Nishimura.