Message-Id: <1557264889-109594-1-git-send-email-yang.shi@linux.alibaba.com>
Date:   Wed,  8 May 2019 05:34:49 +0800
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     jstancek@...hat.com, will.deacon@....com, akpm@...ux-foundation.org
Cc:     yang.shi@...ux.alibaba.com, stable@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

A few new fields were added to mmu_gather to make the TLB flush smarter
for huge pages by tracking which levels of the page table have been
changed.
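
For reference, the bookkeeping involved looks roughly like the sketch
below (an illustrative subset of struct mmu_gather as defined in
include/asm-generic/tlb.h around this kernel version, not the complete
definition):

  /* Sketch: the fields mmu_gather uses to record what changed. */
  struct mmu_gather {
  	struct mm_struct *mm;		/* address space being operated on */
  	unsigned long start, end;	/* virtual range accumulated for flush */
  	/* which page table levels had entries cleared */
  	unsigned int cleared_ptes : 1;
  	unsigned int cleared_pmds : 1;
  	unsigned int cleared_puds : 1;
  	unsigned int cleared_p4ds : 1;
  	/* set when page table pages themselves were freed */
  	unsigned int freed_tables : 1;
  };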

__tlb_reset_range() is used to reset all of this page table state to
"unchanged".  It is called by the TLB flush path when mapping changes
happen in parallel on the same range under a non-exclusive lock (i.e.
read mmap_sem).  Before commit dd2283f2605e ("mm: mmap: zap pages with
read mmap_sem in munmap"), MADV_DONTNEED was the only operation that
could zap pages in parallel, and it does not remove page tables.  But
the aforementioned commit may perform munmap() under read mmap_sem and
free page tables.  This causes a bug [1], reported by Jan Stancek,
because __tlb_reset_range() may pass the wrong page table state to the
architecture-specific TLB flush operations.
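
For context, the helper being removed from this path looks roughly like
the sketch below (a simplified __tlb_reset_range() from mm/mmu_gather.c
in this era; the fullmm special case is omitted):

  static void __tlb_reset_range(struct mmu_gather *tlb)
  {
  	/* Forget the accumulated flush range... */
  	tlb->start = TASK_SIZE;
  	tlb->end = 0;
  	/*
  	 * ...and forget that page tables were freed.  If a parallel
  	 * munmap() really did free page tables, clearing freed_tables
  	 * here lets the architecture skip the page-walk-cache
  	 * invalidation it actually needs.
  	 */
  	tlb->freed_tables = 0;
  	tlb->cleared_ptes = 0;
  	tlb->cleared_pmds = 0;
  	tlb->cleared_puds = 0;
  	tlb->cleared_p4ds = 0;
  }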

So, removing __tlb_reset_range() sounds sane.  This may cause more TLB
flushes for MADV_DONTNEED, but that path should not be hit very often,
so the impact should be negligible.

The original proposed fix came from Jan Stancek, who did most of the
debugging of this issue; I just wrapped everything up together.

[1] https://lore.kernel.org/linux-mm/342bf1fd-f1bf-ed62-1127-e911b5032274@linux.alibaba.com/T/#m7a2ab6c878d5a256560650e56189cfae4e73217f

Reported-by: Jan Stancek <jstancek@...hat.com>
Tested-by: Jan Stancek <jstancek@...hat.com>
Cc: Will Deacon <will.deacon@....com>
Cc: stable@...r.kernel.org
Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
Signed-off-by: Jan Stancek <jstancek@...hat.com>
---
 mm/mmu_gather.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99740e1..9fd5272 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -249,11 +249,12 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
 	 * flush by batching, a thread has stable TLB entry can fail to flush
 	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
 	 * forcefully if we detect parallel PTE batching threads.
+	 *
+	 * munmap() may change the mapping under a non-exclusive lock and
+	 * also free page tables.  Do not call __tlb_reset_range() for it.
 	 */
-	if (mm_tlb_flush_nested(tlb->mm)) {
-		__tlb_reset_range(tlb);
+	if (mm_tlb_flush_nested(tlb->mm))
 		__tlb_adjust_range(tlb, start, end - start);
-	}
 
 	tlb_flush_mmu(tlb);
 
-- 
1.8.3.1
