Date:	Mon, 12 Jan 2015 19:24:51 +0000
From:	Will Deacon <will.deacon@....com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Laszlo Ersek <lersek@...hat.com>,
	Mark Langsdorf <mlangsdo@...hat.com>,
	Marc Zyngier <Marc.Zyngier@....com>,
	Mark Rutland <Mark.Rutland@....com>,
	Steve Capper <steve.capper@...aro.org>,
	"vishnu.ps@...sung.com" <vishnu.ps@...sung.com>,
	main kernel list <linux-kernel@...r.kernel.org>,
	arm kernel list <linux-arm-kernel@...ts.infradead.org>,
	Kyle McMartin <kmcmarti@...hat.com>,
	Dave Hansen <dave@...1.net>
Subject: Re: Linux 3.19-rc3

On Mon, Jan 12, 2015 at 07:07:12PM +0000, Linus Torvalds wrote:
> On Tue, Jan 13, 2015 at 8:06 AM, Linus Torvalds
> <torvalds@...ux-foundation.org> wrote:
> >
> > So I'm ok with it, as long as we don't have a performance regression.
> >
> > Your "don't bother freeing when the batch is empty" should hopefully
> > be fine. Dave, does that work for your case?
> 
> Oh, and Dave just replied that it's ok. So should I just take it
> directly, or expect it through the arm64 tree? Either works for me.

Although I do have a couple of arm64 fixes on the radar, it'd be quicker
if you just take the patch. I added a commit log/SoB below.

Cheers,

Will

--->8

From bcf792ffc9ce29415261d2055954b883c5bec978 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@....com>
Date: Mon, 12 Jan 2015 19:10:55 +0000
Subject: [PATCH] mm: mmu_gather: use tlb->end != 0 only for TLB invalidation

When batching up address ranges for TLB invalidation, we check tlb->end
!= 0 to indicate that some pages have actually been unmapped.

As of commit f045bbb9fa1b ("mmu_gather: fix over-eager
tlb_flush_mmu_free() calling"), we use the same check for freeing these
pages in order to avoid a performance regression where we call
free_pages_and_swap_cache even when no pages are actually queued up.

Unfortunately, the range could have been reset (tlb->end = 0) by
tlb_end_vma, which has been shown to cause memory leaks on arm64.
Furthermore, investigation into these leaks revealed that the fullmm
case on task exit no longer invalidates the TLB, by virtue of tlb->end
== 0 (in 3.18, need_flush would have been set).

This patch resolves the problem by reverting f045bbb9fa1b, using
tlb->local.nr as the predicate for page freeing in tlb_flush_mmu_free
and ensuring that tlb->end is initialised to a non-zero value in the
fullmm case.

Tested-by: Mark Langsdorf <mlangsdo@...hat.com>
Tested-by: Dave Hansen <dave@...1.net>
Signed-off-by: Will Deacon <will.deacon@....com>
---
 include/asm-generic/tlb.h | 8 ++++++--
 mm/memory.c               | 8 ++++----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 08848050922e..db284bff29dc 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -136,8 +136,12 @@ static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 
 static inline void __tlb_reset_range(struct mmu_gather *tlb)
 {
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
+	if (tlb->fullmm) {
+		tlb->start = tlb->end = ~0;
+	} else {
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
+	}
 }
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index c6565f00fb38..54f3a9b00956 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -235,6 +235,9 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -247,7 +250,7 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	for (batch = &tlb->local; batch; batch = batch->next) {
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
 	}
@@ -256,9 +259,6 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }
-- 
2.1.4
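
For illustration only (not part of the patch): below is a minimal user-space
C sketch of the before/after behaviour described in the commit message. The
names (toy_gather, toy_flush_tlbonly, toy_flush_free) are invented for this
example and only loosely mirror the kernel's mmu_gather; the point is simply
why keying the page freeing on tlb->end can leak queued pages once
tlb_end_vma() has reset the range, whereas keying it on the batch count
does not.

/*
 * Toy model, not kernel code: shows why gating page freeing on tlb->end
 * leaks queued pages after the range has been reset, while gating it on
 * the number of queued pages does not.
 */
#include <stdio.h>

struct toy_gather {
	unsigned long start, end;	/* pending invalidation range */
	unsigned int  nr;		/* pages queued for freeing */
};

static void toy_reset_range(struct toy_gather *tlb)
{
	/* roughly what tlb_end_vma() / __tlb_reset_range() do in the !fullmm case */
	tlb->start = ~0UL;
	tlb->end = 0;
}

static void toy_flush_tlbonly(struct toy_gather *tlb)
{
	if (!tlb->end)			/* nothing unmapped: skip the TLB flush */
		return;
	printf("TLB flush of [%#lx, %#lx)\n", tlb->start, tlb->end);
	toy_reset_range(tlb);
}

static void toy_flush_free(struct toy_gather *tlb)
{
	if (!tlb->nr)			/* key freeing on queued pages, not on tlb->end */
		return;
	printf("freeing %u queued pages\n", tlb->nr);
	tlb->nr = 0;
}

int main(void)
{
	struct toy_gather tlb = { .start = 0x1000, .end = 0x5000, .nr = 3 };

	toy_reset_range(&tlb);		/* e.g. tlb_end_vma() ran before the final flush */

	/* Pre-patch behaviour: a single check on tlb->end skips freeing too -> leak. */
	if (!tlb.end)
		printf("old code: skips both flush and free, leaking %u pages\n", tlb.nr);

	/* Patched behaviour: the checks are independent, so the pages still get freed. */
	toy_flush_tlbonly(&tlb);
	toy_flush_free(&tlb);
	return 0;
}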
