Message-Id: <20230911131224.61924-3-alexghiti@rivosinc.com>
Date: Mon, 11 Sep 2023 15:12:22 +0200
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: Will Deacon <will@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Mayuresh Chitale <mchitale@...tanamicro.com>,
Vincent Chen <vincent.chen@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>, linux-arch@...r.kernel.org,
linux-mm@...ck.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, Samuel Holland <samuel@...lland.org>,
Lad Prabhakar <prabhakar.csengg@...il.com>
Cc: Alexandre Ghiti <alexghiti@...osinc.com>,
Andrew Jones <ajones@...tanamicro.com>
Subject: [PATCH v4 2/4] riscv: Improve flush_tlb_range() for hugetlb pages
flush_tlb_range() uses a fixed stride of PAGE_SIZE, so in its current form,
when a hugetlb mapping needs to be flushed, flush_tlb_range() ends up
flushing the whole tlb. Instead, set the stride to the size of the hugetlb
mapping so that only that mapping is flushed. However, if the hugepage is a
NAPOT region, every PTE that constitutes the mapping must be invalidated,
so the stride size must actually be the size mapped by each of those PTEs.
Note that THPs are directly handled by flush_pmd_tlb_range().
Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
Reviewed-by: Andrew Jones <ajones@...tanamicro.com>
---
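To make the stride selection concrete, here is a small standalone userspace
sketch of the same logic. It is illustrative only: the Sv39 sizes, the single
64KB NAPOT order and the stride_for() helper are assumptions made for this
example, not part of the patch (which also handles the folded P4D level), but
the selection itself mirrors the hunk below.

#include <stdio.h>

/* Stubbed constants, assumed for illustration (typical Sv39 layout):
 * 4KB base pages, 2MB PMDs, 1GB PUDs, 512GB PGDs, and a single
 * 64KB NAPOT order (order 4). None of these definitions come from
 * the patch itself. */
#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PMD_SIZE		(1UL << 21)
#define PUD_SIZE		(1UL << 30)
#define PGDIR_SIZE		(1UL << 39)
#define NAPOT_CONT_ORDER_BASE	4
#define NAPOT_ORDER_MAX		5

static unsigned long napot_cont_size(unsigned long order)
{
	return PAGE_SIZE << order;
}

/* Hypothetical helper mirroring the stride selection added by this
 * patch: when the hugepage size matches a NAPOT size, demote the
 * stride to the size mapped by each PTE of that region. */
static unsigned long stride_for(unsigned long hugepage_size)
{
	unsigned long stride_size = hugepage_size;
	unsigned long order, napot_size;

	for (order = NAPOT_CONT_ORDER_BASE; order < NAPOT_ORDER_MAX; order++) {
		napot_size = napot_cont_size(order);

		if (stride_size != napot_size)
			continue;

		if (napot_size >= PGDIR_SIZE)
			stride_size = PGDIR_SIZE;
		else if (napot_size >= PUD_SIZE)
			stride_size = PUD_SIZE;
		else if (napot_size >= PMD_SIZE)
			stride_size = PMD_SIZE;
		else
			stride_size = PAGE_SIZE;

		break;
	}

	return stride_size;
}

int main(void)
{
	/* A 64KB NAPOT region is built from base PTEs, so it must be
	 * flushed with a PAGE_SIZE stride: prints 4096. */
	printf("64K -> stride %lu\n", stride_for(64 * 1024UL));

	/* A regular 2MB hugetlb page matches no NAPOT size and keeps
	 * its own size as the stride: prints 2097152. */
	printf("2M  -> stride %lu\n", stride_for(2UL << 20));

	return 0;
}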
arch/riscv/mm/tlbflush.c | 39 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index fa03289853d8..5bda6d4fed90 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -3,6 +3,7 @@
#include <linux/mm.h>
#include <linux/smp.h>
#include <linux/sched.h>
+#include <linux/hugetlb.h>
#include <asm/sbi.h>
#include <asm/mmu_context.h>
@@ -147,7 +148,43 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
- __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+ unsigned long stride_size;
+
+ stride_size = is_vm_hugetlb_page(vma) ?
+ huge_page_size(hstate_vma(vma)) :
+ PAGE_SIZE;
+
+#ifdef CONFIG_RISCV_ISA_SVNAPOT
+ /*
+ * As stated in the privileged specification, every PTE in a NAPOT
+ * region must be invalidated, so reset the stride in that case.
+ */
+ if (has_svnapot()) {
+ unsigned long order, napot_size;
+
+ for_each_napot_order(order) {
+ napot_size = napot_cont_size(order);
+
+ if (stride_size != napot_size)
+ continue;
+
+ if (napot_size >= PGDIR_SIZE)
+ stride_size = PGDIR_SIZE;
+ else if (napot_size >= P4D_SIZE)
+ stride_size = P4D_SIZE;
+ else if (napot_size >= PUD_SIZE)
+ stride_size = PUD_SIZE;
+ else if (napot_size >= PMD_SIZE)
+ stride_size = PMD_SIZE;
+ else
+ stride_size = PAGE_SIZE;
+
+ break;
+ }
+ }
+#endif
+
+ __flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
--
2.39.2