Message-ID: <20230802012731.62512-1-wangkefeng.wang@huawei.com>
Date: Wed, 2 Aug 2023 09:27:31 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Mina Almasry <almasrymina@...gle.com>, <kirill@...temov.name>,
<joel@...lfernandes.org>, <william.kucharski@...cle.com>,
<kaleshsingh@...gle.com>, <linux-mm@...ck.org>
CC: <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <21cnbao@...il.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: [PATCH v4] arm64: hugetlb: enable __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE

It is better to use the huge page size instead of PAGE_SIZE
as the stride when flushing a hugepage, which reduces the
number of loop iterations in __flush_tlb_range().

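[ Illustration added in editing, not part of the patch: rough
  arithmetic behind the stride choice, assuming a 4KB granule
  where PMD_SIZE is 2MB and PUD_SIZE is 1GB. ]

#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096UL;    /* PAGE_SIZE */
	const unsigned long pmd_size = 2UL << 20;  /* PMD_SIZE  */
	const unsigned long pud_size = 1UL << 30;  /* PUD_SIZE  */

	/* stride == PAGE_SIZE: one TLBI per base page */
	printf("2MB hugepage, PAGE_SIZE stride: %lu TLBIs\n",
	       pmd_size / page_size);              /* 512    */
	printf("1GB hugepage, PAGE_SIZE stride: %lu TLBIs\n",
	       pud_size / page_size);              /* 262144 */

	/* stride == huge_page_size(): a single TLBI per hugepage */
	printf("huge-page-size stride: 1 TLBI\n");
	return 0;
}
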
Provide the arch implementation of flush_hugetlb_tlb_range(),
which is currently used by hugetlb_unshare_all_pmds(),
move_hugetlb_page_tables() and hugetlb_change_protection().

Note that hugepages built from the contiguous bit have to be
invalidated page by page, since the contiguous PTE bit is only
a hint and the hardware may or may not take it into account.

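[ Context added in editing, quoted from memory, so treat the
  exact form as an assumption: when an arch does not define
  __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE, mm/hugetlb.c falls back
  to flush_tlb_range(), which on arm64 flushes with a PAGE_SIZE
  stride and no level hint: ]

#ifdef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
#define flush_hugetlb_tlb_range(vma, addr, end)	flush_hugetlb_tlb_range(vma, addr, end)
#else
#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
#endif
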
Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
---
v4: directly pass tlb_level to __flush_tlb_range() with PMD/PUD size,
suggested by Catalin
v3: add tlb_level hint by using flush_pud/pmd_tlb_range,
suggested by Catalin
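
[ For reference, quoted from memory: the existing THP helpers in
  arch/arm64/include/asm/pgtable.h, which the v3 approach reused,
  pass the same stride/level pairs as this patch. ]

#define flush_pmd_tlb_range(vma, addr, end)	__flush_tlb_range(vma, addr, end, PMD_SIZE, false, 2)
#define flush_pud_tlb_range(vma, addr, end)	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
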
arch/arm64/include/asm/hugetlb.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 6a4a1ab8eb23..a91d6219aa78 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -60,4 +60,19 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
#include <asm-generic/hugetlb.h>
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end)
+{
+ unsigned long stride = huge_page_size(hstate_vma(vma));
+
+ if (stride == PMD_SIZE)
+ __flush_tlb_range(vma, start, end, stride, false, 2);
+ else if (stride == PUD_SIZE)
+ __flush_tlb_range(vma, start, end, stride, false, 1);
+ else
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+}
+
#endif /* __ASM_HUGETLB_H */
--
2.41.0