Message-ID: <d9694393-d916-0d7f-8fce-ac656de544de@huawei.com>
Date: Mon, 31 Jul 2023 17:28:53 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Barry Song <21cnbao@...il.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <muchun.song@...ux.dev>,
Mina Almasry <almasrymina@...gle.com>, <kirill@...temov.name>,
<joel@...lfernandes.org>, <william.kucharski@...cle.com>,
<kaleshsingh@...gle.com>, <linux-mm@...ck.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] arm64: tlb: set huge page size to stride for hugepage
On 2023/7/31 16:43, Barry Song wrote:
> On Mon, Jul 31, 2023 at 4:33 PM Barry Song <21cnbao@...il.com> wrote:
>>
>> On Mon, Jul 31, 2023 at 4:14 PM Kefeng Wang <wangkefeng.wang@...wei.com> wrote:
>>>
>>> It is better to use huge_page_size() for hugepages (HugeTLB) instead of
>>> PAGE_SIZE as the stride, as is already done in flush_pmd/pud_tlb_range();
>>> this reduces the number of loop iterations in __flush_tlb_range().
>>>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
>>> ---
>>> arch/arm64/include/asm/tlbflush.h | 21 +++++++++++----------
>>> 1 file changed, 11 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>> index 412a3b9a3c25..25e35e6f8093 100644
>>> --- a/arch/arm64/include/asm/tlbflush.h
>>> +++ b/arch/arm64/include/asm/tlbflush.h
>>> @@ -360,16 +360,17 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>>  	dsb(ish);
>>>  }
>>>
>>> -static inline void flush_tlb_range(struct vm_area_struct *vma,
>>> -				   unsigned long start, unsigned long end)
>>> -{
>>> -	/*
>>> -	 * We cannot use leaf-only invalidation here, since we may be invalidating
>>> -	 * table entries as part of collapsing hugepages or moving page tables.
>>> -	 * Set the tlb_level to 0 because we can not get enough information here.
>>> -	 */
>>> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
>>> -}
>>> +/*
>>> + * We cannot use leaf-only invalidation here, since we may be invalidating
>>> + * table entries as part of collapsing hugepages or moving page tables.
>>> + * Set the tlb_level to 0 because we can not get enough information here.
>>> + */
>>> +#define flush_tlb_range(vma, start, end)			\
>>> +	__flush_tlb_range(vma, start, end,			\
>>> +			  ((vma)->vm_flags & VM_HUGETLB)	\
>>> +			  ? huge_page_size(hstate_vma(vma))	\
>>> +			  : PAGE_SIZE, false, 0)
>>> +
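With this macro, flush_tlb_range() on a 2MB HugeTLB VMA (with 4K base
pages) expands to __flush_tlb_range(vma, start, end, SZ_2M, false, 0),
i.e. one invalidation per huge page instead of 512.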
>>
>> seems like a good idea.
>>
>> I wonder if a better implementation would be MMU_GATHER_PAGE_SIZE; in that
>> case, we would support the stride for other large folios as well, such as THP.
>>
>
> BTW, in most cases we already have the right stride:
>
> arch/arm64/include/asm/tlb.h already has this to get the stride:
MMU_GATHER_PAGE_SIZE works for tlb_flush(), but flush_tlb_range() is also
called directly, without an mmu_gather; the first three patches in this
series make those callers use the correct flush_[hugetlb/pmd/pud]_tlb_range()
(and there are other such places, like get_clear_contig_flush()/clear_flush()
on arm64). So enabling MMU_GATHER_PAGE_SIZE for arm64 is an independent
thing, right?
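To show why the stride matters here, a minimal sketch (an assumption for
illustration, not the actual arm64 implementation; sketch_tlbi_va() and
sketch_flush_range() are hypothetical names): the loop in
__flush_tlb_range() steps by the stride, so with 4K base pages a 2MB
HugeTLB mapping takes 512 invalidations at stride PAGE_SIZE but only one
at stride huge_page_size().

static void sketch_tlbi_va(unsigned long addr)
{
	/* stand-in for the per-entry "tlbi vale1is/vae1is" sequence */
}

/* Minimal sketch of the range walk; nothing arm64-specific is assumed. */
static void sketch_flush_range(unsigned long start, unsigned long end,
			       unsigned long stride)
{
	unsigned long addr;

	/* one invalidation per stride: 512 TLBIs for 2MB at PAGE_SIZE,
	 * a single TLBI at stride == huge_page_size() */
	for (addr = start; addr < end; addr += stride)
		sketch_tlbi_va(addr);
}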
>
> static inline void tlb_flush(struct mmu_gather *tlb)
> {
> 	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
> 	bool last_level = !tlb->freed_tables;
> 	unsigned long stride = tlb_get_unmap_size(tlb);
> 	int tlb_level = tlb_get_level(tlb);
>
> 	/*
> 	 * If we're tearing down the address space then we only care about
> 	 * invalidating the walk-cache, since the ASID allocator won't
> 	 * reallocate our ASID without invalidating the entire TLB.
> 	 */
> 	if (tlb->fullmm) {
> 		if (!last_level)
> 			flush_tlb_mm(tlb->mm);
> 		return;
> 	}
>
> 	__flush_tlb_range(&vma, tlb->start, tlb->end, stride,
> 			  last_level, tlb_level);
> }
>
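For reference, a rough sketch of how the gather path records the stride
(assuming CONFIG_MMU_GATHER_PAGE_SIZE is enabled; tlb_change_page_size()
is the generic helper from <asm-generic/tlb.h>, while sketch_unmap_range()
and sketch_clear_ptes() are hypothetical names for illustration):

#include <asm/tlb.h>

static void sketch_clear_ptes(unsigned long addr, unsigned long end)
{
	/* stand-in for the real PTE teardown done under the gather */
}

static void sketch_unmap_range(struct mmu_gather *tlb, unsigned long addr,
			       unsigned long end, unsigned int page_size)
{
	/* record the page size so tlb_get_unmap_size() returns it later */
	tlb_change_page_size(tlb, page_size);
	sketch_clear_ptes(addr, end);
	/* tlb_flush() above then uses the recorded size as the stride */
}

flush_tlb_range() callers never go through this path, which is why the
VM_HUGETLB check in the macro is still needed.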
>>>
>>>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>>> {
>>> --
>>> 2.41.0
>>>
>>
>> Thanks
>> Barry