Message-Id: <20231228084642.1765-2-jszhang@kernel.org>
Date: Thu, 28 Dec 2023 16:46:41 +0800
From: Jisheng Zhang <jszhang@...nel.org>
To: Will Deacon <will@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Arnd Bergmann <arnd@...db.de>
Cc: linux-arch@...r.kernel.org,
linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org,
Nadav Amit <namit@...are.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Yu Zhao <yuzhao@...gle.com>,
x86@...nel.org
Subject: [PATCH 1/2] mm/tlb: fix fullmm semantics
From: Nadav Amit <namit@...are.com>

fullmm in mmu_gather is supposed to indicate that the mm is being torn
down (e.g., on process exit) and can therefore allow certain
optimizations. However, tlb_finish_mmu() sets fullmm when what it
actually wants to say is that the TLB should be fully flushed.

Change tlb_finish_mmu() to set need_flush_all instead, and check this
flag in tlb_flush_mmu_tlbonly() when deciding whether a flush is
needed.

At the same time, bring back the arm64 optimization of skipping the
flush entirely when the mm is being torn down on process exit, since
the ASID allocator will not reallocate the ASID without invalidating
the entire TLB. The distinction is made concrete in the sketch below.
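To make the intended split explicit, here is a minimal sketch of the
two flags' meanings and of the check in tlb_flush_mmu_tlbonly() after
this patch. The struct and function names (mmu_gather_model,
needs_tlb_flush) are illustrative stand-ins, not the kernel code:

  #include <stdbool.h>

  /* Stand-in for the relevant mmu_gather fields. */
  struct mmu_gather_model {
          bool fullmm;         /* the mm is going away (e.g. process exit) */
          bool need_flush_all; /* flush the whole TLB; the mm lives on */
          bool freed_tables;
          bool cleared_ptes;
          bool cleared_pmds;
          bool cleared_puds;
          bool cleared_p4ds;
  };

  /* Mirrors the condition in tlb_flush_mmu_tlbonly() after this patch:
   * need_flush_all now forces a flush; fullmm alone no longer does. */
  static bool needs_tlb_flush(const struct mmu_gather_model *tlb)
  {
          return tlb->freed_tables || tlb->cleared_ptes ||
                 tlb->cleared_pmds || tlb->cleared_puds ||
                 tlb->cleared_p4ds || tlb->need_flush_all;
  }
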
Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Jisheng Zhang <jszhang@...nel.org>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will@...nel.org>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: Nick Piggin <npiggin@...il.com>
Cc: x86@...nel.org
---
 arch/arm64/include/asm/tlb.h | 5 ++++-
 include/asm-generic/tlb.h    | 2 +-
 mm/mmu_gather.c              | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 846c563689a8..6164c5f3b78f 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -62,7 +62,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
* invalidating the walk-cache, since the ASID allocator won't
* reallocate our ASID without invalidating the entire TLB.
*/
- if (tlb->fullmm) {
+ if (tlb->fullmm)
+ return;
+
+ if (tlb->need_flush_all) {
if (!last_level)
flush_tlb_mm(tlb->mm);
return;
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..f2d46357bcbb 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -452,7 +452,7 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
* these bits.
*/
if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
- tlb->cleared_puds || tlb->cleared_p4ds))
+ tlb->cleared_puds || tlb->cleared_p4ds || tlb->need_flush_all))
		return;

	tlb_flush(tlb);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 4f559f4ddd21..79298bac3481 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
* On x86 non-fullmm doesn't yield significant difference
* against fullmm.
*/
- tlb->fullmm = 1;
+ tlb->need_flush_all = 1;
__tlb_reset_range(tlb);
tlb->freed_tables = 1;
}
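
For reference, the resulting arm64 tlb_flush() decision order can be
modelled as follows, reusing the model struct from the sketch above.
flush_tlb_mm_model() is a stub standing in for flush_tlb_mm(), and
last_level follows the kernel's meaning of "no page tables were
freed"; this is an illustration of the patched control flow, not the
kernel code:

  /* Stub standing in for flush_tlb_mm(); a real flush goes here. */
  static void flush_tlb_mm_model(struct mmu_gather_model *tlb)
  {
          (void)tlb;
  }

  static void arm64_tlb_flush_model(struct mmu_gather_model *tlb)
  {
          bool last_level = !tlb->freed_tables;

          if (tlb->fullmm) {
                  /* Teardown: the ASID allocator won't reallocate our
                   * ASID without invalidating the entire TLB, so the
                   * flush can be skipped entirely. */
                  return;
          }
          if (tlb->need_flush_all) {
                  /* As in the patched code: issue a full flush_tlb_mm()
                   * only when page tables were freed (!last_level),
                   * matching the body that previously ran under fullmm. */
                  if (!last_level)
                          flush_tlb_mm_model(tlb);
                  return;
          }
          /* ... otherwise a ranged flush follows ... */
  }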
--
2.40.0