Message-Id: <20210131001132.3368247-18-namit@vmware.com>
Date:   Sat, 30 Jan 2021 16:11:29 -0800
From:   Nadav Amit <nadav.amit@...il.com>
To:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc:     Nadav Amit <namit@...are.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
        x86@...nel.org
Subject: [RFC 17/20] mm/tlb: update completed deferred TLB flush conditionally

From: Nadav Amit <namit@...are.com>

If all the deferred TLB flushes were completed, there is no need to
update the completed TLB flush generation. This update requires an
atomic cmpxchg, so we would like to skip it.

To do so, save for each mm the last TLB generation in which TLB flushes
were deferred. Saving this information requires another atomic cmpxchg,
but we assume that deferred TLB flushes are less frequent than regular
TLB flushes, so the trade-off is worthwhile.
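
The idea can be illustrated with a minimal user-space sketch (not kernel
code; the struct and helper names merely mirror the patch, and C11
atomics stand in for the kernel's atomic64 API):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical user-space analogue of the relevant mm_struct fields. */
struct mm_sketch {
	_Atomic uint64_t tlb_gen;           /* latest TLB generation */
	_Atomic uint64_t tlb_gen_deferred;  /* last generation with a deferred flush */
	_Atomic uint64_t tlb_gen_completed; /* generation guaranteed to be flushed */
};

/* Monotonically advance *gen to new_gen, analogous to tlb_update_generation(). */
static void update_generation(_Atomic uint64_t *gen, uint64_t new_gen)
{
	uint64_t cur = atomic_load(gen);

	/* Retry until *gen >= new_gen; cmpxchg fails if another CPU raced us. */
	while (cur < new_gen &&
	       !atomic_compare_exchange_weak(gen, &cur, new_gen))
		;
}

/* Returns 1 if the cmpxchg loop was skipped, 0 if it ran. */
static int mark_tlb_gen_done(struct mm_sketch *mm, uint64_t gen)
{
	/*
	 * Fast path: every deferred flush was already completed, so
	 * tlb_gen_completed needs no update and the cmpxchg is avoided.
	 */
	if (atomic_load(&mm->tlb_gen_deferred) ==
	    atomic_load(&mm->tlb_gen_completed))
		return 1;

	update_generation(&mm->tlb_gen_completed, gen);
	return 0;
}
```

With deferred == completed the function returns on the read-only fast
path; only when a flush was actually deferred does it pay for the
cmpxchg loop.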

Signed-off-by: Nadav Amit <namit@...are.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will@...nel.org>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: x86@...nel.org
---
 include/asm-generic/tlb.h | 23 ++++++++++++++++++-----
 include/linux/mm_types.h  |  5 +++++
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 74dbb56d816d..a41af03fbede 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -536,6 +536,14 @@ static inline void tlb_update_generation(atomic64_t *gen, u64 new_gen)
 
 static inline void mark_mm_tlb_gen_done(struct mm_struct *mm, u64 gen)
 {
+	/*
+	 * If all the deferred TLB generations were completed, we can skip
+	 * the update of tlb_gen_completed and save a few cycles on cmpxchg.
+	 */
+	if (atomic64_read(&mm->tlb_gen_deferred) ==
+	    atomic64_read(&mm->tlb_gen_completed))
+		return;
+
 	/*
 	 * Update the completed generation to the new generation if the new
 	 * generation is greater than the previous one.
@@ -546,7 +554,7 @@ static inline void mark_mm_tlb_gen_done(struct mm_struct *mm, u64 gen)
 static inline void read_defer_tlb_flush_gen(struct mmu_gather *tlb)
 {
 	struct mm_struct *mm = tlb->mm;
-	u64 mm_gen;
+	u64 mm_gen, new_gen;
 
 	/*
 	 * Any change of PTE before calling __track_deferred_tlb_flush() must be
@@ -567,11 +575,16 @@ static inline void read_defer_tlb_flush_gen(struct mmu_gather *tlb)
 	 * correctness issues, and should not induce overheads, since anyhow in
 	 * TLB storms it is better to perform full TLB flush.
 	 */
-	if (mm_gen != tlb->defer_gen) {
-		VM_BUG_ON(mm_gen < tlb->defer_gen);
+	if (mm_gen == tlb->defer_gen)
+		return;
 
-		tlb->defer_gen = inc_mm_tlb_gen(mm);
-	}
+	VM_BUG_ON(mm_gen < tlb->defer_gen);
+
+	new_gen = inc_mm_tlb_gen(mm);
+	tlb->defer_gen = new_gen;
+
+	/* Update mm->tlb_gen_deferred */
+	tlb_update_generation(&mm->tlb_gen_deferred, new_gen);
 }
 
 #ifndef CONFIG_PER_TABLE_DEFERRED_FLUSHES
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cae9e8bbf8e6..4122a9b8b56f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -578,6 +578,11 @@ struct mm_struct {
 		 */
 		atomic64_t tlb_gen;
 
+		/*
+		 * The last TLB generation which was deferred.
+		 */
+		atomic64_t tlb_gen_deferred;
+
 		/*
 		 * TLB generation which is guaranteed to be flushed, including
 		 * all the PTE changes that were performed before tlb_gen was
-- 
2.25.1
