Message-ID: <alpine.DEB.2.02.1401091335450.31538@chino.kir.corp.google.com>
Date: Thu, 9 Jan 2014 13:40:10 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Michal Hocko <mhocko@...e.cz>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org,
"Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [PATCH] memcg: Do not hang on OOM when killed by userspace OOM access to memory reserves

On Thu, 9 Jan 2014, Michal Hocko wrote:
> Eric has reported that he can see task(s) stuck in the memcg OOM handler
> regularly. The only way out is to
> 	echo 0 > $GROUP/memory.oom_control
> His usecase is:
> - Set up a hierarchy with memory and the freezer
>   (disable the kernel oom killer and have a process watch for oom).
> - In that memory cgroup add a process with one thread per cpu.
> - In one thread slowly allocate, I think it is 16M of RAM once per second,
>   and mlock and dirty it (just to force the pages into RAM and keep them
>   there).
> - When oom is achieved, loop:
>   * attempt to freeze all of the tasks.
>   * if frozen, send every task SIGKILL, unfreeze, remove the directory in
>     cgroupfs.
>
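For reference, the allocator half of this reproducer might look roughly
like the sketch below. It is only reconstructed from the description
above, not Eric's actual test program; the 16M step, the mlock/dirty
pattern and the thread-per-cpu setup come from the report, everything
else is assumed.

#include <pthread.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STEP	(16UL << 20)		/* 16M per second, per the report */

/* One thread keeps growing: allocate, mlock and dirty 16M every second. */
static void *alloc_thread(void *arg)
{
	for (;;) {
		void *p = mmap(NULL, STEP, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			break;
		mlock(p, STEP);		/* force the pages into RAM */
		memset(p, 0xff, STEP);	/* and dirty them so they stay charged */
		sleep(1);
	}
	pause();
	return NULL;
}

/* The remaining per-cpu threads just sit there waiting to be killed. */
static void *idle_thread(void *arg)
{
	pause();
	return NULL;
}

int main(void)
{
	long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t tid;

	for (i = 1; i < ncpus; i++)
		pthread_create(&tid, NULL, idle_thread, NULL);
	alloc_thread(NULL);		/* the first thread does the allocating */
	return 0;
}

The userspace oom handler then freezes the group, sends SIGKILL to every
task, thaws and removes the directory, as described above.
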
> Eric has then pinpointed the issue to be memcg specific.
>
> All tasks are sitting on the memcg_oom_waitq when memcg oom is disabled.
> Those that have received a fatal signal will bypass the charge and should
> continue on their way out. The tricky part is that the exit path might
> trigger a page fault (e.g. exit_robust_list), and thus a memcg charge,
> while its memcg is still under OOM because nobody has released any
> charges yet.
> Unlike with the in-kernel OOM handler, the exiting task doesn't get
> TIF_MEMDIE set, so further charges of the killed task are not
> short-circuited and it falls into the memcg OOM again without any way
> out, as there are no fatal signals pending anymore.
>
> This patch fixes the issue by checking PF_EXITING early in
> __mem_cgroup_try_charge and bypassing the charge, the same as if the task
> had a fatal signal pending or TIF_MEMDIE set.
>
> Tasks exiting normally (i.e. not killed) will now bypass the charge as
> well, but this should be OK: the task is leaving and will release its
> memory anyway, and increasing the memory pressure just to release it a
> moment later seems like a dubious waste of cycles. Besides, charges after
> exit_signals should be rare.
>
> Reported-by: Eric W. Biederman <ebiederm@...ssion.com>
> Signed-off-by: Michal Hocko <mhocko@...e.cz>
Is this tested?
> ---
> mm/memcontrol.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b8dfed1b9d87..b86fbb04b7c6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2685,7 +2685,8 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
> * MEMDIE process.
> */
> if (unlikely(test_thread_flag(TIF_MEMDIE)
> - || fatal_signal_pending(current)))
> + || fatal_signal_pending(current))
> + || current->flags & PF_EXITING)
> goto bypass;
>
> if (unlikely(task_in_memcg_oom(current)))
This would become problematic if a significant amount of memory were
charged in the exit() path. I don't know of an egregious amount of memory
being allocated and charged after PF_EXITING is set, but if that happens
in the future then this could potentially cause system oom conditions even
in memcg configurations, such as the one Tejun suggested, that are
designed to handle such conditions in userspace:
          ___root___
         /          \
      user          oom
      /  \          /  \
     A    B        C    D
where the limit of user is equal to the amount of system memory minus
whatever memory is needed by the system oom handler attached as a
descendant of oom, while the limits of A + B are still allowed to exceed
the limit of user.
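
Concretely, the limit arithmetic for such a setup might look like the
sketch below; it assumes a v1 memcg hierarchy mounted at
/sys/fs/cgroup/memory and an arbitrary 128M reserve for the oom handler,
neither of which is spelled out above.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* limit of "user" = system memory - reserve for the oom handler */
	unsigned long long total = (unsigned long long)sysconf(_SC_PHYS_PAGES) *
				   (unsigned long long)sysconf(_SC_PAGE_SIZE);
	unsigned long long reserve = 128ULL << 20;	/* example value only */
	FILE *f = fopen("/sys/fs/cgroup/memory/user/memory.limit_in_bytes", "w");

	if (!f)
		return 1;
	fprintf(f, "%llu\n", total - reserve);
	fclose(f);
	/* A and B keep their own, larger limits, so A + B may exceed user. */
	return 0;
}
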
So how do we ensure that memory allocations in the exit() path don't cause
system oom conditions, given that the above configuration would no longer
provide any strict guarantee?
Thanks.