Message-Id: <20231228084642.1765-1-jszhang@kernel.org>
Date: Thu, 28 Dec 2023 16:46:40 +0800
From: Jisheng Zhang <jszhang@...nel.org>
To: Will Deacon <will@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Arnd Bergmann <arnd@...db.de>
Cc: linux-arch@...r.kernel.org,
linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org
Subject: [PATCH 0/2] riscv: tlb: avoid tlb flushing on exit & execve
The mmu_gather code sets fullmm=1 when tearing down the entire address
space of an mm_struct on exit or execve. So if the underlying platform
supports ASIDs, the TLB flushing can be avoided because the ASID
allocator will never re-allocate a dirty ASID.
But currently, tlb_finish_mmu() sets fullmm when what it actually wants
to say is that the TLB should be fully flushed.
So patch1 takes one of Nadav's patches from [1] to fix the fullmm
semantics. Compared with the original patch from [1], the differences are:
a. fix the fullmm semantics in arm64 too
b. bring the fullmm optimization back on arm64
patch2 does the optimization on riscv. With it, the Process creation
score of unixbench on the T-HEAD TH1520 platform is improved by about 4%.
Link: https://lore.kernel.org/linux-mm/20210131001132.3368247-2-namit@vmware.com/ [1]
Jisheng Zhang (1):
riscv: tlb: avoid tlb flushing if fullmm == 1
Nadav Amit (1):
mm/tlb: fix fullmm semantics
arch/arm64/include/asm/tlb.h | 5 ++++-
arch/riscv/include/asm/tlb.h | 9 +++++++++
include/asm-generic/tlb.h | 2 +-
mm/mmu_gather.c | 2 +-
4 files changed, 15 insertions(+), 3 deletions(-)
--
2.40.0