Message-Id: <20160322154533.c269d76a65b81bb1b8f72545@linux-foundation.org>
Date: Tue, 22 Mar 2016 15:45:33 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 2/9] mm, oom: introduce oom reaper
On Tue, 22 Mar 2016 12:00:19 +0100 Michal Hocko <mhocko@...nel.org> wrote:
> This is based on the idea from Mel Gorman discussed during LSFMM 2015 and
> independently brought up by Oleg Nesterov.
What happened to oom-reaper-handle-mlocked-pages.patch? I have it in
-mm but I don't see it in this v6.
From: Michal Hocko <mhocko@...e.com>
Subject: oom reaper: handle mlocked pages
__oom_reap_vmas currently skips over all mlocked vmas because they need
special treatment before they are unmapped. This is primarily done for
simplicity. There is no reason to skip over them and reduce the amount of
reclaimed memory, though. This is safe from the semantic point of view
because try_to_unmap_one, during the rmap walk, keeps telling reclaim to
cull the page back to the unevictable list and mlock it again.
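(For illustration only: a heavily simplified sketch of the mechanism the
paragraph above relies on. The names mirror mm/rmap.c of that era, but this
is a hypothetical condensation, not the literal kernel code.)

	/*
	 * Sketch: when the rmap walk reaches a pte in a VM_LOCKED vma,
	 * try_to_unmap_one() refuses to unmap the page and re-mlocks it,
	 * so reclaim culls the page back to the unevictable list instead
	 * of freeing it.
	 */
	static int try_to_unmap_one_sketch(struct page *page,
					   struct vm_area_struct *vma,
					   enum ttu_flags flags)
	{
		if (!(flags & TTU_IGNORE_MLOCK) &&
		    (vma->vm_flags & VM_LOCKED)) {
			mlock_vma_page(page);	/* back on the unevictable LRU */
			return SWAP_MLOCK;	/* tell reclaim to cull the page */
		}
		/* ... otherwise clear the pte and drop the mapping ... */
		return SWAP_AGAIN;
	}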
munlock_vma_pages_all is also safe to call from the oom reaper context
because it does not rely on any locks other than mmap_sem (held for read).
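(Again only a sketch of the calling convention this describes; the real
loop is in the hunks below. The hypothetical helper assumes the reaper
takes mmap_sem for read, which is exactly what the patch relies on.)

	static void reap_mlocked_vma_sketch(struct mm_struct *mm,
					    struct vm_area_struct *vma)
	{
		down_read(&mm->mmap_sem);	/* all munlock_vma_pages_all() needs */
		if (vma->vm_flags & VM_LOCKED)
			munlock_vma_pages_all(vma);
		up_read(&mm->mmap_sem);
	}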
Signed-off-by: Michal Hocko <mhocko@...e.com>
Cc: Andrea Arcangeli <andrea@...nel.org>
Acked-by: David Rientjes <rientjes@...gle.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
mm/oom_kill.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff -puN mm/oom_kill.c~oom-reaper-handle-mlocked-pages mm/oom_kill.c
--- a/mm/oom_kill.c~oom-reaper-handle-mlocked-pages
+++ a/mm/oom_kill.c
@@ -442,13 +442,6 @@ static bool __oom_reap_vmas(struct mm_st
 			continue;
 
 		/*
-		 * mlocked VMAs require explicit munlocking before unmap.
-		 * Let's keep it simple here and skip such VMAs.
-		 */
-		if (vma->vm_flags & VM_LOCKED)
-			continue;
-
-		/*
 		 * Only anonymous pages have a good chance to be dropped
 		 * without additional steps which we cannot afford as we
 		 * are OOM already.
@@ -458,9 +451,12 @@ static bool __oom_reap_vmas(struct mm_st
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			if (vma->vm_flags & VM_LOCKED)
+				munlock_vma_pages_all(vma);
 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
 					 &details);
+		}
 	}
 	tlb_finish_mmu(&tlb, 0, -1);
 	up_read(&mm->mmap_sem);
_