Message-ID: <20170814012617.GB25427@bbox>
Date:   Mon, 14 Aug 2017 10:26:17 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Nadav Amit <namit@...are.com>, linux-mm@...ck.org,
        nadav.amit@...il.com, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, Ingo Molnar <mingo@...hat.com>,
        Russell King <linux@...linux.org.uk>,
        Tony Luck <tony.luck@...el.com>,
        Martin Schwidefsky <schwidefsky@...ibm.com>,
        "David S. Miller" <davem@...emloft.net>,
        Heiko Carstens <heiko.carstens@...ibm.com>,
        Yoshinori Sato <ysato@...rs.sourceforge.jp>,
        Jeff Dike <jdike@...toit.com>, linux-arch@...r.kernel.org
Subject: Re: [PATCH v6 6/7] mm: fix MADV_[FREE|DONTNEED] TLB flush miss
 problem

Hi Peter,

On Fri, Aug 11, 2017 at 03:30:20PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 01, 2017 at 05:08:17PM -0700, Nadav Amit wrote:
> >  void tlb_finish_mmu(struct mmu_gather *tlb,
> >  		unsigned long start, unsigned long end)
> >  {
> > -	arch_tlb_finish_mmu(tlb, start, end);
> > +	/*
> > +	 * If parallel threads are doing PTE changes on the same range
> > +	 * under a non-exclusive lock (e.g., mmap_sem read-side) but defer
> > +	 * the TLB flush by batching, a thread with a stale TLB entry can
> > +	 * skip the flush after observing pte_none/!pte_dirty, for example.
> > +	 * So flush the TLB forcefully if we detect parallel PTE batching threads.
> > +	 */
> > +	bool force = mm_tlb_flush_nested(tlb->mm);
> > +
> > +	arch_tlb_finish_mmu(tlb, start, end, force);
> >  }
> 
> I don't understand the comment nor the ordering. What guarantees we see
> the increment if we need to?
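
For reference, the detection under discussion boils down to the
helpers below (a sketch following this series; the atomic
tlb_flush_pending counter and the helper names are taken from the
earlier patches in the thread, so treat this as illustration rather
than the final code):

	static inline void inc_tlb_flush_pending(struct mm_struct *mm)
	{
		atomic_inc(&mm->tlb_flush_pending);
	}

	static inline void dec_tlb_flush_pending(struct mm_struct *mm)
	{
		atomic_dec(&mm->tlb_flush_pending);
	}

	/*
	 * A count above 1 means another thread is also batching PTE
	 * changes on this mm, so our deferred flush may race with its
	 * PTE updates and must be forced.
	 */
	static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
	{
		return atomic_read(&mm->tlb_flush_pending) > 1;
	}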

How about this for the commenting part?

>From 05f06fd6aba14447a9ca2df8b810fbcf9a58e14b Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Mon, 14 Aug 2017 10:16:56 +0900
Subject: [PATCH] mm: add describable comment for TLB batch race

[1] fixes a rather subtle/complicated bug that is hard to
understand from the limited code comment alone.

This patch adds a sequence diagram that, I hope, explains
the problem more clearly.

[1] 99baac21e458, mm: fix MADV_[FREE|DONTNEED] TLB flush miss problem

Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Nadav Amit <namit@...are.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
 mm/memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index bcbe56f52163..f571b0eb9816 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -413,12 +413,35 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 void tlb_finish_mmu(struct mmu_gather *tlb,
 		unsigned long start, unsigned long end)
 {
 	/*
 	 * If parallel threads are doing PTE changes on the same range
 	 * under a non-exclusive lock (e.g., mmap_sem read-side) but defer
 	 * the TLB flush by batching, a thread with a stale TLB entry can
 	 * skip the flush after observing pte_none/!pte_dirty, for example.
 	 * So flush the TLB forcefully if we detect parallel PTE batching threads.
+	 *
+	 * Example: MADV_DONTNEED stale TLB problem on the same range
+	 *
+	 * CPU 0				CPU 1
+	 * *a = 1;
+	 *					MADV_DONTNEED
+	 * MADV_DONTNEED			tlb_gather_mmu
+	 * tlb_gather_mmu
+	 * down_read(mmap_sem)			down_read(mmap_sem)
+	 *					pte_lock
+	 *					pte_get_and_clear
+	 *					tlb_remove_tlb_entry
+	 *					pte_unlock
+	 * pte_lock
+	 * finds that the pte is none
+	 * pte_unlock
+	 * tlb_finish_mmu doesn't flush
+	 *
+	 * Access the address through the stale TLB entry
+	 * *a = 2; i.e., it succeeds without a segfault
+	 *					tlb_finish_mmu flushes the
+	 *					range, but it is too late.
+	 *
 	 */
 	bool force = mm_tlb_flush_nested(tlb->mm);
 
-- 
2.7.4
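
For completeness, the consumer side of the flag looks roughly like
this (a sketch: the range widening in arch_tlb_finish_mmu follows the
earlier patches in this series, with the batch teardown omitted, so
details may differ):

	void arch_tlb_finish_mmu(struct mmu_gather *tlb,
			unsigned long start, unsigned long end, bool force)
	{
		/*
		 * If we raced with another PTE-batching thread, widen
		 * the flush to the full range passed to tlb_gather_mmu()
		 * so a stale TLB entry cannot survive the race above.
		 */
		if (force)
			__tlb_adjust_range(tlb, start, end - start);

		tlb_flush_mmu(tlb);
	}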

