Message-Id: <20230727185553.980262-3-alexghiti@rivosinc.com>
Date: Thu, 27 Jul 2023 20:55:51 +0200
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: Will Deacon <will@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Mayuresh Chitale <mchitale@...tanamicro.com>,
Vincent Chen <vincent.chen@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>, linux-arch@...r.kernel.org,
linux-mm@...ck.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org
Cc: Alexandre Ghiti <alexghiti@...osinc.com>
Subject: [PATCH v2 2/4] riscv: Improve flush_tlb_range() for hugetlb pages

flush_tlb_range() uses a fixed stride of PAGE_SIZE, so in its current form,
when a hugetlb mapping needs to be flushed, the whole TLB ends up being
flushed. Instead, set the stride to the size of the hugetlb mapping so that
only the hugetlb mapping is flushed.

Note that THPs are directly handled by flush_pmd_tlb_range().
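
To make the stride arithmetic concrete, here is a stand-alone, hedged
illustration (user-space C, not kernel code; the shift values assume 4KiB
base pages and a 2MiB PMD-level hugepage, and all names are invented for
the example):

#include <stdio.h>

int main(void)
{
        unsigned long size = 4UL << 20;         /* flush a 4 MiB range */
        unsigned long page_shift = 12;          /* 4 KiB base pages */
        unsigned long huge_shift = 21;          /* 2 MiB PMD hugepages */
        int is_hugetlb = 1;                     /* pretend the VMA is hugetlb */

        /* Same stride selection the patch performs in flush_tlb_range() */
        unsigned long stride_shift = is_hugetlb ? huge_shift : page_shift;

        printf("stride          = %lu bytes\n", 1UL << stride_shift);
        printf("PAGE_SIZE steps = %lu\n", size >> page_shift);  /* 1024 */
        printf("hugepage steps  = %lu\n", size >> huge_shift);  /* 2 */

        return 0;
}
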
Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
---
 arch/riscv/mm/tlbflush.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index fa03289853d8..3e4acef1f6bc 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -3,6 +3,7 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/sched.h>
+#include <linux/hugetlb.h>
 #include <asm/sbi.h>
 #include <asm/mmu_context.h>
 
@@ -147,7 +148,14 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
                      unsigned long end)
 {
-        __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+        unsigned long stride_shift;
+
+        stride_shift = is_vm_hugetlb_page(vma) ?
+                       huge_page_shift(hstate_vma(vma)) :
+                       PAGE_SHIFT;
+
+        __flush_tlb_range(vma->vm_mm,
+                          start, end - start, 1 << stride_shift);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
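
For context, a rough model of how a stride-based range flush consumes this
value; a hedged sketch only, not the actual __flush_tlb_range() (the real
helper also handles ASIDs, remote harts via SBI, and can fall back to a
full flush). It assumes riscv's local_flush_tlb_page(), which issues one
sfence.vma per address:

/*
 * Hedged sketch of a stride-based local range flush. For a single
 * hugetlb mapping, size == stride, so the loop runs exactly once
 * instead of once per 4 KiB base page.
 */
static void sketch_local_flush_range(unsigned long start, unsigned long size,
                                     unsigned long stride)
{
        unsigned long addr;

        for (addr = start; addr < start + size; addr += stride)
                local_flush_tlb_page(addr);     /* sfence.vma addr */
}
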
--
2.39.2