Message-ID: <4b5a3cfb-e13d-4df4-c08a-fb176cc2cbf6@huawei.com>
Date: Tue, 1 Aug 2023 19:22:09 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Catalin Marinas <catalin.marinas@....com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Will Deacon <will@...nel.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Mina Almasry <almasrymina@...gle.com>, <kirill@...temov.name>,
<joel@...lfernandes.org>, <william.kucharski@...cle.com>,
<kaleshsingh@...gle.com>, <linux-mm@...ck.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <21cnbao@...il.com>
Subject: Re: [PATCH v2 2/2] arm64: hugetlb: enable
__HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
On 2023/8/1 19:09, Catalin Marinas wrote:
> On Tue, Aug 01, 2023 at 10:31:45AM +0800, Kefeng Wang wrote:
>> +#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
>> +static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
>> + unsigned long start,
>> + unsigned long end)
>> +{
>> + unsigned long stride = huge_page_size(hstate_vma(vma));
>> +
>> + if (stride != PMD_SIZE && stride != PUD_SIZE)
>> + stride = PAGE_SIZE;
>> + __flush_tlb_range(vma, start, end, stride, false, 0);
>
> We could use some hints here for the tlb_level (2 for pmd, 1 for pud).
> Regarding the last_level argument to __flush_tlb_range(), I think it
> needs to stay false since this function is also called on the
> hugetlb_unshare_pmds() path where the pud is cleared and needs
> invalidating.
>
> That said, maybe you can rewrite it as a switch statement and call
> flush_pmd_tlb_range() or flush_pud_tlb_range() (just make sure these are
> defined when CONFIG_HUGETLBFS is enabled).
>
How about the following, which avoids pulling in the THP code?
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index e5c2e3dd9cf0..a7ce59d3388e 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -66,10 +66,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 					   unsigned long end)
 {
 	unsigned long stride = huge_page_size(hstate_vma(vma));
+	int tlb_level = 0;
 
-	if (stride != PMD_SIZE && stride != PUD_SIZE)
+	switch (stride) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		tlb_level = 1;
+		break;
+#endif
+	case PMD_SIZE:
+		tlb_level = 2;
+		break;
+	default:
 		stride = PAGE_SIZE;
-	__flush_tlb_range(vma, start, end, stride, false, 0);
+	}
+
+	__flush_tlb_range(vma, start, end, stride, false, tlb_level);
 }