Message-ID: <20160217124100.GE29196@dhcp22.suse.cz>
Date: Wed, 17 Feb 2016 13:41:00 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: akpm@...ux-foundation.org, rientjes@...gle.com, mgorman@...e.de,
oleg@...hat.com, torvalds@...ux-foundation.org, hughd@...gle.com,
andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] mm,oom: exclude TIF_MEMDIE processes from candidates.
On Wed 17-02-16 19:29:33, Tetsuo Handa wrote:
> >From 142b08258e4c60834602e9b0a734564208bc6397 Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> Date: Wed, 17 Feb 2016 16:29:29 +0900
> Subject: [PATCH 1/6] mm,oom: exclude TIF_MEMDIE processes from candidates.
>
> The OOM reaper kernel thread can reclaim OOM victim's memory before
> the victim releases it.
If this is meant to be preparatory work, which to be honest I am not
convinced it is, then referring to the oom reaper here is confusing and
misleading.
> But it is possible that a TIF_MEMDIE thread
> gets stuck at down_read(&mm->mmap_sem) in exit_mm() called from
> do_exit() due to one of !TIF_MEMDIE threads doing a GFP_KERNEL
> allocation between down_write(&mm->mmap_sem) and up_write(&mm->mmap_sem)
> (e.g. mmap()). In that case, we need to use SysRq-f (manual invocation
> of the OOM killer) because down_read_trylock(&mm->mmap_sem) by the OOM
> reaper will not succeed.
But all the tasks sharing the mm with the oom victim will have
fatal_signal_pending and so they will get access to memory reserves,
which should help them finish the allocation request. So the above
text is misleading.
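Just for reference, the mechanism is roughly the following (a paraphrased
sketch of the out_of_memory() resp. gfp_to_alloc_flags() paths from this
era, not a verbatim quote of the current tree):

	/* out_of_memory() (sketch): a task with a fatal signal pending is
	 * selected right away so that it can allocate and exit quickly */
	if (current->mm &&
	    (fatal_signal_pending(current) || task_will_free_mem(current))) {
		mark_oom_victim(current);	/* sets TIF_MEMDIE */
		return true;
	}

	/* gfp_to_alloc_flags() (sketch): TIF_MEMDIE translates into
	 * ALLOC_NO_WATERMARKS, i.e. access to the memory reserves */
	if (!in_interrupt() &&
	    ((current->flags & PF_MEMALLOC) ||
	     unlikely(test_thread_flag(TIF_MEMDIE))))
		alloc_flags |= ALLOC_NO_WATERMARKS;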
If the down_read is blocked because a down_write is blocked, then a better
solution is down_write_killable, which has already been proposed.
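E.g. vm_mmap_pgoff() could then do something like the following (just a
sketch of the proposed conversion, error handling trimmed):

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;	/* the OOM killer's SIGKILL gets us out */
	ret = do_mmap_pgoff(file, addr, len, prot, flag, pgoff, &populate);
	up_write(&mm->mmap_sem);

so the writer no longer blocks mmap_sem forever and the victim's
down_read() in exit_mm() can eventually succeed.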
> Also, there are other situations where the OOM
> reaper cannot reap the victim's memory (e.g. CONFIG_MMU=n,
there was no clear evidence that this is a problem on !MMU
configurations.
> victim's memory is shared with OOM-unkillable processes) which will
> require manual SysRq-f for making progress.
Sharing mm with a task which is hidden from the OOM killer is a clear
misconfiguration IMO.
> However, it is possible that the OOM killer chooses the same OOM victim
> forever which already has TIF_MEMDIE.
This can happen only for the sysrq+f case AFAICS. The regular OOM killer
will stop scanning after it encounters the first TIF_MEMDIE task.
If you want to handle the sysrq+f case then it should imho be explicit.
Something I've tried here as patch 1/2
http://lkml.kernel.org/r/1452632425-20191-1-git-send-email-mhocko@kernel.org
which has been nacked. Maybe you can try that again without the
fatal_signal_pending resp. task_will_free_mem checks, which were
controversial back then. Hiding this in find_lock_non_victim_task_mm
just makes the code more obscure and harder to read.
> This is effectively disabling
> SysRq-f. This patch excludes processes which have a TIF_MEMDIE thread
> from OOM victim candidates.
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
In short, I dislike this patch. It makes the code harder to read, and the
same can be solved in a more straightforward way:
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 078e07ec0906..68cc130c163b 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -281,6 +281,8 @@ enum oom_scan_t oom_scan_process_thread(struct oom_control *oc,
 	if (test_tsk_thread_flag(task, TIF_MEMDIE)) {
 		if (!is_sysrq_oom(oc))
 			return OOM_SCAN_ABORT;
+		else
+			return OOM_SCAN_CONTINUE;
 	}
 	if (!task->mm)
 		return OOM_SCAN_CONTINUE;
@@ -719,6 +721,9 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
 			if (process_shares_mm(child, p->mm))
 				continue;
+
+			if (is_sysrq_oom(oc) && test_tsk_thread_flag(child, TIF_MEMDIE))
+				continue;
 			/*
 			 * oom_badness() returns 0 if the thread is unkillable
 			 */
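I.e. the first hunk lets a sysrq+f invocation continue scanning past an
already existing TIF_MEMDIE task instead of aborting, and the second one
keeps oom_kill_process() from sacrificing a child which is already being
killed.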
--
Michal Hocko
SUSE Labs