Message-ID: <20081205135232.GB10004@balbir.in.ibm.com>
Date: Fri, 5 Dec 2008 19:22:32 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Pavel Emelyanov <xemul@...nvz.org>,
Li Zefan <lizf@...fujitsu.com>, Paul Menage <menage@...gle.com>
Subject: Re: [RFC][PATCH -mmotm 3/4] memcg: avoid dead lock caused by race
between oom and cpuset_attach
* Daisuke Nishimura <nishimura@....nes.nec.co.jp> [2008-12-05 21:24:50]:
> mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
> This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
>
> OTOH, the page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
> which may eventually call mem_cgroup_out_of_memory(). And mem_cgroup_out_of_memory()
> calls cgroup_lock().
> This means cgroup_lock() can be called under down_read(mm->mmap_sem).
>
> If those two paths race, a deadlock can happen.
>
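To make the lock inversion concrete, here is a minimal userspace analogue
(illustration only; pthread locks stand in for cgroup_mutex and mm->mmap_sem,
and all names below are hypothetical):

/* Illustration only: userspace analogue of the two lock orders above. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

static void *attach_path(void *arg)		/* cpuset_attach() -> mpol_rebind_mm() */
{
	(void)arg;
	pthread_mutex_lock(&cgroup_mutex);	/* cgroup_lock() held by the cgroup core */
	sleep(1);				/* widen the race window */
	pthread_rwlock_wrlock(&mmap_sem);	/* down_write(mm->mmap_sem) */
	puts("attach path took both locks");
	pthread_rwlock_unlock(&mmap_sem);
	pthread_mutex_unlock(&cgroup_mutex);
	return NULL;
}

static void *fault_path(void *arg)		/* page fault -> memcg OOM */
{
	(void)arg;
	pthread_rwlock_rdlock(&mmap_sem);	/* down_read(mm->mmap_sem) */
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&cgroup_mutex);	/* cgroup_lock() in mem_cgroup_out_of_memory() */
	puts("fault path took both locks");
	pthread_mutex_unlock(&cgroup_mutex);
	pthread_rwlock_unlock(&mmap_sem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, attach_path, NULL);
	pthread_create(&b, NULL, fault_path, NULL);
	pthread_join(a, NULL);	/* hangs once each thread holds its first lock: AB-BA deadlock */
	pthread_join(b, NULL);
	return 0;
}

Compile with -pthread; once both threads have taken their first lock, neither
can take its second one and the program hangs, which is the deadlock described
above.
>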
> This patch avoids this deadlock by:
> - removing cgroup_lock() from mem_cgroup_out_of_memory().
> - defining a new mutex (memcg_tasklist) and serializing mem_cgroup_move_task()
>   (the ->attach handler of the memory cgroup) and mem_cgroup_out_of_memory().
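In terms of the analogue above, the fix corresponds to the OOM path taking a
dedicated memcg_tasklist mutex instead of cgroup_mutex, so it no longer shares
a lock with the attach path while still excluding the memcg ->attach handler
(a rough sketch of the idea, not the actual patch; names are illustrative):

/* Sketch only, continuing the userspace analogue above. */
#include <pthread.h>

static pthread_mutex_t memcg_tasklist = PTHREAD_MUTEX_INITIALIZER;

void mem_cgroup_move_task_analogue(void)	/* memcg ->attach, runs with cgroup_mutex held */
{
	pthread_mutex_lock(&memcg_tasklist);
	/* ... move the task's charges / ownership ... */
	pthread_mutex_unlock(&memcg_tasklist);
}

void mem_cgroup_oom_analogue(void)		/* reached with mmap_sem held for read */
{
	pthread_mutex_lock(&memcg_tasklist);	/* was: cgroup_lock() */
	/* ... select and kill a task in the over-limit memcg ... */
	pthread_mutex_unlock(&memcg_tasklist);
}

The OOM path now never waits on cgroup_mutex, so the ordering conflict with
mpol_rebind_mm() disappears, while mem_cgroup_move_task() and the OOM killer
still serialize against each other via memcg_tasklist.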
A similar race has been reported for cpuset_migrate_mm(), which is
called with cgroup_mutex held and in turn calls do_migrate_pages(),
which can enter reclaim and thus try to acquire cgroup_lock(). If we
avoid reclaiming pages in cpuset_migrate_mm(), as the first patch
did, that also solves the reported race.
>
> Signed-off-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Looks good to me
Acked-by: Balbir Singh <balbir@...ux.vnet.ibm.com>
--
Balbir