Message-Id: <20081206123503.4e32c3fb.d-nishimura@mtf.biglobe.ne.jp>
Date: Sat, 6 Dec 2008 12:35:03 +0900
From: Daisuke Nishimura <d-nishimura@....biglobe.ne.jp>
To: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>
Cc: "LKML" <linux-kernel@...r.kernel.org>,
"linux-mm" <linux-mm@...ck.org>,
"Balbir Singh" <balbir@...ux.vnet.ibm.com>,
"Pavel Emelyanov" <xemul@...nvz.org>,
"Li Zefan" <lizf@...fujitsu.com>,
"Paul Menage" <menage@...gle.com>, d-nishimura@....biglobe.ne.jp,
"Daisuke Nishimura" <nishimura@....nes.nec.co.jp>
Subject: Re: [RFC][PATCH -mmotm 3/4] memcg: avoid dead lock caused by
race between oom and cpuset_attach
On Fri, 5 Dec 2008 22:27:10 +0900 (JST)
"KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com> wrote:
> Daisuke Nishimura said:
> > mpol_rebind_mm(), which can be called from cpuset_attach(), does
> > down_write(mm->mmap_sem). This means down_write(mm->mmap_sem) can be
> > called under cgroup_mutex.
> >
> > OTOH, the page fault path does down_read(mm->mmap_sem) and calls
> > mem_cgroup_try_charge_xxx(), which may eventually call
> > mem_cgroup_out_of_memory(), and mem_cgroup_out_of_memory() calls
> > cgroup_lock(). This means cgroup_lock() can be called under
> > down_read(mm->mmap_sem).
> >
> good catch.
>
Thanks.
> > If those two paths race, a deadlock can happen.
> >
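FWIW, the inversion is easy to reproduce in a minimal user-space pthread
model (illustrative only: cgroup_mutex and mmap_sem below merely stand in
for the kernel locks, and the usleep()s just widen the race window).
Built with "cc -pthread", this reliably hangs:

/* user-space model of the lock inversion described above */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* path A: cpuset_attach() -> mpol_rebind_mm() */
static void *attach_path(void *arg)
{
        pthread_mutex_lock(&cgroup_mutex);      /* cgroup_lock() */
        usleep(10000);                          /* widen the race window */
        pthread_rwlock_wrlock(&mmap_sem);       /* down_write(mm->mmap_sem) */
        puts("attach path done");
        pthread_rwlock_unlock(&mmap_sem);
        pthread_mutex_unlock(&cgroup_mutex);
        return NULL;
}

/* path B: page fault -> mem_cgroup_out_of_memory() */
static void *fault_path(void *arg)
{
        pthread_rwlock_rdlock(&mmap_sem);       /* down_read(mm->mmap_sem) */
        usleep(10000);
        pthread_mutex_lock(&cgroup_mutex);      /* cgroup_lock() in OOM path */
        puts("fault path done");
        pthread_mutex_unlock(&cgroup_mutex);
        pthread_rwlock_unlock(&mmap_sem);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        pthread_create(&a, NULL, attach_path, NULL);
        pthread_create(&b, NULL, fault_path, NULL);
        pthread_join(a, NULL);  /* never returns: A holds cgroup_mutex and
                                 * waits for mmap_sem, while B holds mmap_sem
                                 * and waits for cgroup_mutex */
        pthread_join(b, NULL);
        return 0;
}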
> > This patch avoids this deadlock by:
> > - removing cgroup_lock() from mem_cgroup_out_of_memory().
> agree to this.
>
> > - defining a new mutex (memcg_tasklist) and serializing
> > mem_cgroup_move_task() (the ->attach handler of the memory cgroup)
> > and mem_cgroup_out_of_memory().
> >
> Hmm... this seems like a temporary fix (and it adds a new global
> lock...). But OK, we need a fix; let's revisit this later.
>
I agree.
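To make the ordering after this patch concrete, here is the same
user-space model with the fix applied, as I read it (reusing the
includes, the cgroup_mutex/mmap_sem declarations and main() from the
sketch above; an illustration, not the actual mm/memcontrol.c code).
The OOM path takes the new memcg_tasklist mutex instead of
cgroup_lock(), and memcg_tasklist is never held while acquiring
mmap_sem, so no new cycle appears:

static pthread_mutex_t memcg_tasklist = PTHREAD_MUTEX_INITIALIZER;

/* path A: cgroup attach, still serialized by cgroup_mutex */
static void *attach_path(void *arg)
{
        pthread_mutex_lock(&cgroup_mutex);

        /* cpuset_attach() -> mpol_rebind_mm() */
        pthread_rwlock_wrlock(&mmap_sem);
        pthread_rwlock_unlock(&mmap_sem);

        /* mem_cgroup_move_task(): serialized against the OOM path,
         * but does not touch mmap_sem while holding memcg_tasklist */
        pthread_mutex_lock(&memcg_tasklist);
        pthread_mutex_unlock(&memcg_tasklist);

        pthread_mutex_unlock(&cgroup_mutex);
        return NULL;
}

/* path B: page fault -> mem_cgroup_out_of_memory() */
static void *fault_path(void *arg)
{
        pthread_rwlock_rdlock(&mmap_sem);       /* down_read(mm->mmap_sem) */

        /* memcg_tasklist replaces cgroup_lock(), so this path no
         * longer depends on cgroup_mutex and the A<->B cycle is gone */
        pthread_mutex_lock(&memcg_tasklist);
        pthread_mutex_unlock(&memcg_tasklist);

        pthread_rwlock_unlock(&mmap_sem);
        return NULL;
}

With this version both threads finish regardless of interleaving.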
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
Thank you.
Daisuke Nishimura.