Message-ID: <20240229232211.161961-5-samuel.holland@sifive.com>
Date: Thu, 29 Feb 2024 15:21:45 -0800
From: Samuel Holland <samuel.holland@...ive.com>
To: Palmer Dabbelt <palmer@...belt.com>,
linux-riscv@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Alexandre Ghiti <alexghiti@...osinc.com>,
Jisheng Zhang <jszhang@...nel.org>,
Yunhui Cui <cuiyunhui@...edance.com>,
Samuel Holland <samuel.holland@...ive.com>
Subject: [PATCH v5 04/13] riscv: mm: Broadcast kernel TLB flushes only when needed
__flush_tlb_range() avoids broadcasting TLB flushes when an mm context
is only active on the local CPU. Apply this same optimization to TLB
flushes of kernel memory when only one CPU is online. This check can be
constant-folded when SMP is disabled.
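
As a rough standalone illustration (not the kernel code itself, and using a
hypothetical NR_CPUS macro and helper in place of the real cpumask machinery),
the sketch below models why the check folds away on !SMP builds: with only one
possible CPU, "is any CPU other than mine in the mask?" is a compile-time
false, so only the local flush path remains.

/*
 * Standalone model (not kernel code): why the broadcast check can
 * constant-fold when SMP is disabled. NR_CPUS here is a stand-in for
 * the single-CPU configuration.
 */
#include <stdio.h>

#define NR_CPUS 1	/* hypothetical: models CONFIG_SMP=n, one CPU */

static void local_flush(void)     { puts("local flush only"); }
static void broadcast_flush(void) { puts("broadcast flush"); }

/* Simplified stand-in for cpumask_any_but(): index of the first set
 * bit in 'mask' other than 'cpu', or NR_CPUS if there is none. */
static unsigned int any_cpu_but(unsigned long mask, unsigned int cpu)
{
	for (unsigned int i = 0; i < NR_CPUS; i++)
		if (i != cpu && (mask & (1UL << i)))
			return i;
	return NR_CPUS;
}

static void flush_kernel_range_model(unsigned long mask, unsigned int cpu)
{
	/* With NR_CPUS == 1, any_cpu_but() can never find another CPU,
	 * so this condition is always true and the compiler can drop
	 * the broadcast path entirely. */
	if (any_cpu_but(mask, cpu) >= NR_CPUS)
		local_flush();
	else
		broadcast_flush();
}

int main(void)
{
	flush_kernel_range_model(0x1, 0);	/* prints "local flush only" */
	return 0;
}

With more than one possible CPU the same test is evaluated at run time, which
is the behavior the patch keeps for SMP kernels.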
Reviewed-by: Alexandre Ghiti <alexghiti@...osinc.com>
Signed-off-by: Samuel Holland <samuel.holland@...ive.com>
---
(no changes since v4)
Changes in v4:
- New patch for v4
arch/riscv/mm/tlbflush.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 0373661bd1c4..8cdb082f00ca 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -102,22 +102,15 @@ static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
 			      unsigned long start, unsigned long size,
 			      unsigned long stride)
 {
-	bool broadcast;
+	unsigned int cpu;
 
 	if (cpumask_empty(cmask))
 		return;
 
-	if (cmask != cpu_online_mask) {
-		unsigned int cpuid;
+	cpu = get_cpu();
 
-		cpuid = get_cpu();
-		/* check if the tlbflush needs to be sent to other CPUs */
-		broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
-	} else {
-		broadcast = true;
-	}
-
-	if (!broadcast) {
+	/* Check if the TLB flush needs to be sent to other CPUs. */
+	if (cpumask_any_but(cmask, cpu) >= nr_cpu_ids) {
 		local_flush_tlb_range_asid(start, size, stride, asid);
 	} else if (riscv_use_sbi_for_rfence()) {
 		sbi_remote_sfence_vma_asid(cmask, start, size, asid);
@@ -131,8 +124,7 @@ static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
 		on_each_cpu_mask(cmask, __ipi_flush_tlb_range_asid, &ftd, 1);
 	}
 
-	if (cmask != cpu_online_mask)
-		put_cpu();
+	put_cpu();
 }
 
 static inline unsigned long get_mm_asid(struct mm_struct *mm)
--
2.43.1