Message-ID: <20240229232211.161961-14-samuel.holland@sifive.com>
Date: Thu, 29 Feb 2024 15:21:54 -0800
From: Samuel Holland <samuel.holland@...ive.com>
To: Palmer Dabbelt <palmer@...belt.com>,
linux-riscv@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Alexandre Ghiti <alexghiti@...osinc.com>,
Jisheng Zhang <jszhang@...nel.org>,
Yunhui Cui <cuiyunhui@...edance.com>,
Samuel Holland <samuel.holland@...ive.com>
Subject: [PATCH v5 13/13] riscv: mm: Always use an ASID to flush mm contexts

Even if multiple ASIDs are not supported, using the single-ASID variant
of the sfence.vma instruction preserves TLB entries for global (kernel)
pages. So it is always more efficient to use the single-ASID code path.
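
For reference, the instruction-level difference can be sketched as below.
The helper names are hypothetical and exist only to illustrate the
sfence.vma semantics; they are not functions in arch/riscv. The point is
the second operand: with rs2 = x0 the flush also invalidates entries for
global (kernel) mappings, while with rs2 = asid those entries may be
retained.

	/*
	 * Illustrative sketch only, not part of this patch. These helper
	 * names are hypothetical; only the inline assembly matters.
	 */

	/* rs2 omitted (x0): flush entries for addr in all address spaces,
	 * including entries for global (kernel) mappings. */
	static inline void sketch_flush_page_all_asids(unsigned long addr)
	{
		__asm__ __volatile__ ("sfence.vma %0"
				      : : "r" (addr) : "memory");
	}

	/* rs2 = asid: flush entries for addr in one address space only;
	 * entries for global mappings may be preserved, so passing an ASID
	 * is never worse than the no-ASID form. */
	static inline void sketch_flush_page_one_asid(unsigned long addr,
						      unsigned long asid)
	{
		__asm__ __volatile__ ("sfence.vma %0, %1"
				      : : "r" (addr), "r" (asid) : "memory");
	}
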
Reviewed-by: Alexandre Ghiti <alexghiti@...osinc.com>
Signed-off-by: Samuel Holland <samuel.holland@...ive.com>
---
Changes in v5:
- Leave use_asid_allocator declared in asm/mmu_context.h

Changes in v4:
- There is now only one copy of __flush_tlb_range()

Changes in v2:
- Update both copies of __flush_tlb_range()

 arch/riscv/mm/tlbflush.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e194e14e5b2b..5b473588a985 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -108,8 +108,7 @@ static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
 
 static inline unsigned long get_mm_asid(struct mm_struct *mm)
 {
-	return static_branch_unlikely(&use_asid_allocator) ?
-			cntx2asid(atomic_long_read(&mm->context.id)) : FLUSH_TLB_NO_ASID;
+	return cntx2asid(atomic_long_read(&mm->context.id));
 }
 
 void flush_tlb_mm(struct mm_struct *mm)
--
2.43.1