Message-ID: <20171106085251.jwrpgne4dnl4gopy@dhcp22.suse.cz>
Date: Mon, 6 Nov 2017 09:52:51 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Bob Liu <lliubbo@...il.com>
Cc: Wang Nan <wangnan0@...wei.com>, Linux-MM <linux-mm@...ck.org>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Bob Liu <liubo95@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Ingo Molnar <mingo@...nel.org>, Roman Gushchin <guro@...com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Andrea Arcangeli <aarcange@...hat.com>, will.deacon@....com
Subject: Re: [RFC PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry

On Mon 06-11-17 15:04:40, Bob Liu wrote:
> On Mon, Nov 6, 2017 at 11:36 AM, Wang Nan <wangnan0@...wei.com> wrote:
> > tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> > space. In this case, tlb->fullmm is true. Some archs, like arm64, don't
> > flush the TLB when tlb->fullmm is true:
> >
> > commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").
> >
>
> CC'ed Will Deacon.
>
> > This leaks TLB entries. For example, when the oom_reaper selects a
> > task and reaps its virtual memory space, another thread in this task
> > group may still be running on another core and can access the already
> > freed memory through stale TLB entries.
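For reference, the arm64 behaviour referenced above looks roughly like
this. A paraphrased sketch of tlb_flush() from arch/arm64/include/asm/tlb.h
after commit 5a7862e83000, not the literal source:

static inline void tlb_flush(struct mmu_gather *tlb)
{
        struct vm_area_struct vma = { .vm_mm = tlb->mm, };

        /*
         * fullmm means the whole address space is going away. The
         * ASID allocator guarantees the ASID cannot be reused before
         * the TLB is invalidated on rollover, so no flush is issued
         * here.
         */
        if (tlb->fullmm)
                return;

        flush_tlb_range(&vma, tlb->start, tlb->end);
}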
No threads should be running in userspace by the time the reaper gets to
unmap their address space. So the only potential case is that they are
accessing the user memory from the kernel, in which case we should fault,
and we have MMF_UNSTABLE to cause a SIGBUS. So is the race you are
describing real?
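To spell out the MMF_UNSTABLE part: the fault path carries a check along
these lines. Paraphrased from memory of handle_mm_fault() in mm/memory.c
at the time (ret and vma are locals there); the exact condition may differ:

        /*
         * The mm has been reaped by the oom reaper, so refaults cannot
         * be trusted. Kernel threads operating on such an mm get a
         * SIGBUS instead of silently refaulting zero pages.
         */
        if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR) &&
                     test_bit(MMF_UNSTABLE, &vma->vm_mm->flags)))
                ret = VM_FAULT_SIGBUS;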
> > This patch gathers each vma instead of the full vm space, so
> > tlb->fullmm is not true. The behavior of the oom reaper becomes
> > similar to munmapping before do_exit, which should be safe for all
> > archs.
I do not have any objections to per-vma TLB flushing because it would
free the gathered pages sooner, but I am not sure I see any real problem
here. Have you seen any real issues, or is this more of a review-driven
fix?
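For completeness, the reason the (0, -1) range ends up with fullmm set is
the range check in the generic gather setup, roughly (paraphrased from
tlb_gather_mmu() in mm/memory.c of that era):

void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
                    unsigned long start, unsigned long end)
{
        tlb->mm = mm;
        /* Is it from 0 to ~0? Then this is a full-mm gather. */
        tlb->fullmm = !(start | (end + 1));
        /* ... rest of the gather state initialisation elided ... */
}

With per-vma start/end values that check never fires, so arm64's fullmm
shortcut does not apply and the vma range gets flushed as usual.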
> > Signed-off-by: Wang Nan <wangnan0@...wei.com>
> > Cc: Bob Liu <liubo95@...wei.com>
> > Cc: Michal Hocko <mhocko@...e.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: David Rientjes <rientjes@...gle.com>
> > Cc: Ingo Molnar <mingo@...nel.org>
> > Cc: Roman Gushchin <guro@...com>
> > Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
> > Cc: Andrea Arcangeli <aarcange@...hat.com>
> > ---
> > mm/oom_kill.c | 7 ++++---
> > 1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index dee0f75..18c5b35 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -532,7 +532,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
> > */
> > set_bit(MMF_UNSTABLE, &mm->flags);
> >
> > - tlb_gather_mmu(&tlb, mm, 0, -1);
> > for (vma = mm->mmap ; vma; vma = vma->vm_next) {
> > if (!can_madv_dontneed_vma(vma))
> > continue;
> > @@ -547,11 +546,13 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
> > * we do not want to block exit_mmap by keeping mm ref
> > * count elevated without a good reason.
> > */
> > - if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
> > + if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
> > + tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end);
> > unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
> > NULL);
> > + tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end);
> > + }
> > }
> > - tlb_finish_mmu(&tlb, 0, -1);
> > pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
> > task_pid_nr(tsk), tsk->comm,
> > K(get_mm_counter(mm, MM_ANONPAGES)),
--
Michal Hocko
SUSE Labs