Message-Id: <201804182049.EDJ21857.OHJOMOLFQVFFtS@I-love.SAKURA.ne.jp>
Date:   Wed, 18 Apr 2018 20:49:11 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:     mhocko@...nel.org, rientjes@...gle.com
Cc:     akpm@...ux-foundation.org, aarcange@...hat.com, guro@...com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch v2] mm, oom: fix concurrent munlock and oom reaper unmap

Michal Hocko wrote:
> On Tue 17-04-18 19:52:41, David Rientjes wrote:
> > Since exit_mmap() is done without the protection of mm->mmap_sem, it is
> > possible for the oom reaper to concurrently operate on an mm until
> > MMF_OOM_SKIP is set.
> > 
> > This allows munlock_vma_pages_all() to concurrently run while the oom
> > reaper is operating on a vma.  Since munlock_vma_pages_range() depends on
> > clearing VM_LOCKED from vm_flags before actually doing the munlock to
> > determine if any other vmas are locking the same memory, the check for
> > VM_LOCKED in the oom reaper is racy.
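
(For reference, the interleaving being described is roughly the
following; a simplified sketch, not verbatim kernel code:)

	/* Exit path (munlock_vma_pages_range(), simplified): */
	vma->vm_flags &= ~VM_LOCKED;	/* clears the flag up front ...  */
	/* ... then walks the range munlocking pages (still in progress) */

	/* OOM reaper, running concurrently without mmap_sem: */
	if (vma->vm_flags & VM_LOCKED)	/* sees the flag already cleared */
		continue;		/* => the vma is not skipped ... */
	unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end, NULL);
					/* ... and pages the munlock walk
					   is still touching get unmapped */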
> > 
> > This is especially noticeable on architectures such as powerpc where
> > clearing a huge pmd requires serialize_against_pte_lookup().  If the pmd
> > is zapped by the oom reaper during follow_page_mask() after the check for
> > pmd_none() is bypassed, this ends up dereferencing a NULL ptl.
> > 
> > Fix this by reusing MMF_UNSTABLE to specify that an mm should not be
> > reaped.  This prevents the concurrent munlock_vma_pages_range() and
> > unmap_page_range().  The oom reaper will simply not operate on an mm that
> > has the bit set and leave the unmapping to exit_mmap().
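
(The mechanism described above would make both sides key off the bit,
roughly like this; a sketch of the idea only, not the actual patch:)

	/* exit_mmap(), sketch: */
	if (unlikely(mm_is_oom_victim(mm)))
		/* From here on, the teardown belongs to exit_mmap(). */
		set_bit(MMF_UNSTABLE, &mm->flags);

	/* oom reaper, sketch: */
	if (test_bit(MMF_UNSTABLE, &mm->flags))
		return true;	/* leave the unmapping to exit_mmap() */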
> 
> This will further complicate the protocol and, in theory, reintroduces
> the oom lockup issues, because the oom reaper doesn't set MMF_OOM_SKIP
> when racing with exit_mmap, so we rely entirely on nothing blocking
> there... The resulting code ends up more fragile and tricky.
> 
> Can we try a simpler way, get back to what I was suggesting before [1],
> and simply not play tricks with
> 		down_write(&mm->mmap_sem);
> 		up_write(&mm->mmap_sem);
> 
> but instead use the write lock in exit_mmap for oom victims?

You mean something like the diff below?
If so, I'm tempted to call __oom_reap_task_mm() before taking mmap_sem for write.
It would be OK to call __oom_reap_task_mm() at the beginning of __mmput()
(see the sketch after the diff)...

diff --git a/mm/mmap.c b/mm/mmap.c
index 188f195..ba7083b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3011,17 +3011,22 @@ void exit_mmap(struct mm_struct *mm)
 	struct mmu_gather tlb;
 	struct vm_area_struct *vma;
 	unsigned long nr_accounted = 0;
+	const bool is_oom_mm = mm_is_oom_victim(mm);
 
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
 
 	if (mm->locked_vm) {
+		if (is_oom_mm)
+			down_write(&mm->mmap_sem);
 		vma = mm->mmap;
 		while (vma) {
 			if (vma->vm_flags & VM_LOCKED)
 				munlock_vma_pages_all(vma);
 			vma = vma->vm_next;
 		}
+		if (is_oom_mm)
+			up_write(&mm->mmap_sem);
 	}
 
 	arch_exit_mmap(mm);
@@ -3037,7 +3042,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 
-	if (unlikely(mm_is_oom_victim(mm))) {
+	if (unlikely(is_oom_mm)) {
 		/*
 		 * Wait for oom_reap_task() to stop working on this
 		 * mm. Because MMF_OOM_SKIP is already set before
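
The __mmput() idea mentioned above would look roughly like this (a
hypothetical sketch only; it assumes an mm-only variant of
__oom_reap_task_mm() is made callable from kernel/fork.c):

	static inline void __mmput(struct mm_struct *mm)
	{
		VM_BUG_ON(atomic_read(&mm->mm_users));

		/*
		 * Hypothetical: reap the oom victim's address space up
		 * front, so the bulk of the unmapping is done before
		 * exit_mmap() starts taking mmap_sem for write.
		 */
		if (unlikely(mm_is_oom_victim(mm)))
			__oom_reap_task_mm(mm);

		uprobe_clear_state(mm);
		exit_aio(mm);
		ksm_exit(mm);
		khugepaged_exit(mm);
		exit_mmap(mm);
		...
	}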
