Message-Id: <20170829190526.8767-1-jglisse@redhat.com>
Date:   Tue, 29 Aug 2017 15:05:26 -0400
From:   Jérôme Glisse <jglisse@...hat.com>
To:     linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc:     Jérôme Glisse <jglisse@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Bernhard Held <berny156@....de>,
        Adam Borowski <kilobyte@...band.pl>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Takashi Iwai <tiwai@...e.de>,
        Nadav Amit <nadav.amit@...il.com>,
        Mike Galbraith <efault@....de>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        axie <axie@....com>, Andrew Morton <akpm@...ux-foundation.org>
Subject: [RFC PATCH] mm/rmap: do not call mmu_notifier_invalidate_page() v3

Some MMU notifiers need to be able to sleep in their callbacks. This was
broken by commit c7ab0d2fdc84 ("mm: convert try_to_unmap_one() to use
page_vma_mapped_walk()").

This patch restores the ability to sleep and properly captures the range
of addresses that needs to be invalidated.
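
For context, a driver-side notifier of the kind affected might look like
the minimal sketch below (hypothetical names, not part of this patch).
Such a callback may only sleep if the core mm invokes it outside the
page table lock, which is what moving the invalidation out of the
page_vma_mapped_walk() loop arranges:

  #include <linux/kernel.h>
  #include <linux/mmu_notifier.h>
  #include <linux/mutex.h>

  struct mydrv_mirror {
          struct mmu_notifier mn;
          struct mutex lock;      /* protects a device page table mirror */
  };

  static void mydrv_invalidate_range(struct mmu_notifier *mn,
                                     struct mm_struct *mm,
                                     unsigned long start,
                                     unsigned long end)
  {
          struct mydrv_mirror *mirror =
                  container_of(mn, struct mydrv_mirror, mn);

          /* Sleeping here is only legal when not called under the PTL. */
          mutex_lock(&mirror->lock);
          /* ... tear down device mappings covering [start, end) ... */
          mutex_unlock(&mirror->lock);
  }

  static const struct mmu_notifier_ops mydrv_mmu_notifier_ops = {
          .invalidate_range = mydrv_invalidate_range,
  };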

Relevant threads:
https://lkml.kernel.org/r/20170809204333.27485-1-jglisse@redhat.com
https://lkml.kernel.org/r/20170804134928.l4klfcnqatni7vsc@black.fi.intel.com
https://marc.info/?l=kvm&m=150327081325160&w=2

Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Bernhard Held <berny156@....de>
Cc: Adam Borowski <kilobyte@...band.pl>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Wanpeng Li <kernellwp@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Takashi Iwai <tiwai@...e.de>
Cc: Nadav Amit <nadav.amit@...il.com>
Cc: Mike Galbraith <efault@....de>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: axie <axie@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
 mm/rmap.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index c8993c63eb25..0b25b720f494 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -888,6 +888,8 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		.flags = PVMW_SYNC,
 	};
 	int *cleaned = arg;
+	bool invalidate = false;
+	unsigned long start = address, end = address;
 
 	while (page_vma_mapped_walk(&pvmw)) {
 		int ret = 0;
@@ -905,6 +907,9 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			entry = pte_mkclean(entry);
 			set_pte_at(vma->vm_mm, address, pte, entry);
 			ret = 1;
+			invalidate = true;
+			/* range is exclusive */
+			end = pvmw.address + PAGE_SIZE;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
 			pmd_t *pmd = pvmw.pmd;
@@ -919,18 +924,22 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			entry = pmd_mkclean(entry);
 			set_pmd_at(vma->vm_mm, address, pmd, entry);
 			ret = 1;
+			invalidate = true;
+			/* range is exclusive */
+			end = pvmw.address + PAGE_SIZE;
 #else
 			/* unexpected pmd-mapped page? */
 			WARN_ON_ONCE(1);
 #endif
 		}
 
-		if (ret) {
-			mmu_notifier_invalidate_page(vma->vm_mm, address);
+		if (ret)
 			(*cleaned)++;
-		}
 	}
 
+	if (invalidate)
+		mmu_notifier_invalidate_range(vma->vm_mm, start, end);
+
 	return true;
 }
 
@@ -1323,8 +1332,9 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	};
 	pte_t pteval;
 	struct page *subpage;
-	bool ret = true;
+	bool ret = true, invalidate = false;
 	enum ttu_flags flags = (enum ttu_flags)arg;
+	unsigned long start = address, end = address;
 
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
@@ -1490,8 +1500,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 discard:
 		page_remove_rmap(subpage, PageHuge(page));
 		put_page(page);
-		mmu_notifier_invalidate_page(mm, address);
+		invalidate = true;
+		/* range is exclusive */
+		end = address + PAGE_SIZE;
 	}
+
+	if (invalidate)
+		mmu_notifier_invalidate_range(mm, start, end);
+
 	return ret;
 }
 
-- 
2.13.5
