Date:	Tue, 28 Oct 2014 11:44:22 +0000
From:	Will Deacon <will.deacon@....com>
To:	torvalds@...ux-foundation.org, peterz@...radead.org
Cc:	linux-kernel@...r.kernel.org, linux@....linux.org.uk,
	benh@...nel.crashing.org, Will Deacon <will.deacon@....com>
Subject: [RFC PATCH 2/2] zap_pte_range: fix partial TLB flushing in response to a dirty pte

When we encounter a dirty page during unmap, we force a TLB invalidation
to avoid racing with pte_mkclean while stale, dirty TLB entries remain in
the CPU.
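
For context, a simplified sketch of the dirty-page check in
zap_pte_range (paraphrased from mm/memory.c; the surrounding
bookkeeping is elided here):

	if (!PageAnon(page)) {
		if (pte_dirty(ptent)) {
			/* dirty file page: force a flush before it is freed */
			force_flush = 1;
			set_page_dirty(page);
		}
		/* ... mark_page_accessed(), rss accounting ... */
	}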

This uses the same force_flush logic as the batch-failure code, but since
we don't break out of the loop on finding a dirty pte, tlb->end can be
less than addr, because we only batch present ptes. tlb->start is then
set to addr while tlb->end is restored to the smaller saved value, so a
negative range can be passed to subsequent TLB invalidation calls,
potentially leading to massive over-invalidation of the TLB (observed in
practice running firefox on arm64).
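
To illustrate with hypothetical values (not taken from the report):
suppose the batched range ends at tlb->end == 0x1000 because the
trailing ptes were not present, but the loop has advanced addr to
0x3000 by the time the pte walk finishes. The pre-patch sequence is
then:

	old_end = tlb->end;		/* 0x1000 */
	tlb->end = addr;		/* 0x3000: beyond the batched range */
	tlb_flush_mmu_tlbonly(tlb);
	tlb->start = addr;		/* 0x3000 */
	tlb->end = old_end;		/* 0x1000: end < start, a negative range */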

This patch fixes the issue by clamping the end of the first flush range
to min(tlb->end, addr). The first range then covers tlb->start to
min(tlb->end, addr), which corresponds exactly to the currently batched
range. The second range covers anything remaining up to the end of the
zap, which may still lead to a (much reduced) over-invalidation of the
TLB.
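
With the same hypothetical values as above, the fixed sequence becomes:

	tlb->end = old_end = min(tlb->end, addr);	/* min(0x1000, 0x3000) = 0x1000 */
	tlb_flush_mmu_tlbonly(tlb);			/* flushes exactly the batched range */
	tlb->start = old_end;				/* 0x1000 */
	tlb->end = end;					/* remainder of the zap range */

Both ranges are now well-formed; the second may still cover ptes that
were never batched, hence the residual (much reduced) over-invalidation.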

Signed-off-by: Will Deacon <will.deacon@....com>
---
 mm/memory.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 3e503831e042..ea41508d41f3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1194,11 +1194,10 @@ again:
 		 * then update the range to be the remaining
 		 * TLB range.
 		 */
-		old_end = tlb->end;
-		tlb->end = addr;
+		tlb->end = old_end = min(tlb->end, addr);
 		tlb_flush_mmu_tlbonly(tlb);
-		tlb->start = addr;
-		tlb->end = old_end;
+		tlb->start = old_end;
+		tlb->end = end;
 	}
 	pte_unmap_unlock(start_pte, ptl);
 
-- 
2.1.1
