Message-ID: <CAGsJ_4xJ=VibK3FFGr5UHDQn+HW_o1_ZMiUjhgo8+i=UWa3UGQ@mail.gmail.com>
Date: Mon, 4 Dec 2023 05:41:27 +0800
From: Barry Song <21cnbao@...il.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: akpm@...ux-foundation.org, andreyknvl@...il.com,
anshuman.khandual@....com, ardb@...nel.org,
catalin.marinas@....com, david@...hat.com, dvyukov@...gle.com,
glider@...gle.com, james.morse@....com, jhubbard@...dia.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, mark.rutland@....com, maz@...nel.org,
oliver.upton@...ux.dev, ryabinin.a.a@...il.com,
suzuki.poulose@....com, vincenzo.frascino@....com,
wangkefeng.wang@...wei.com, will@...nel.org, willy@...radead.org,
yuzenghui@...wei.com, yuzhao@...gle.com, ziy@...dia.com
Subject: Re: [PATCH v2 14/14] arm64/mm: Add ptep_get_and_clear_full() to
optimize process teardown
On Thu, Nov 30, 2023 at 8:00 PM Ryan Roberts <ryan.roberts@....com> wrote:
>
> >> Just because you found a pte that maps a page from a large folio, that doesn't
> >> mean that all pages from the folio are mapped, and it doesn't mean they are
> >> mapped contiguously. We have to deal with partial munmap(), partial mremap()
> >> etc. We could split in these cases (and in future it might be sensible to try),
> >> but that can fail (due to GUP). So we still have to handle the corner case.
> >>
> >> But I can imagine doing a batched version of ptep_get_and_clear(), like I did
> >> for ptep_set_wrprotects(). And I think this would be an improvement.
> >>
> >> The reason I haven't done that so far, is because ptep_get_and_clear() returns
> >> the pte value when it was cleared and that's hard to do if batching due to the
> >> storage requirement. But perhaps you could just return the logical OR of the
> >> dirty and young bits across all ptes in the batch. The caller should be able to
> >> reconstitute the rest if it needs it?
> >>
> >> What do you think?
> >
> > I really don't know why we care about the return value of ptep_get_and_clear(),
> > as zap_pte_range() doesn't ask for any return value at all. So why not give up
> > this kind of complex logical OR of the dirty and young bits entirely, since
> > they are useless in this case?
>
> That's not the case in v6.7-rc1:
>
>
> static unsigned long zap_pte_range(struct mmu_gather *tlb,
> 				struct vm_area_struct *vma, pmd_t *pmd,
> 				unsigned long addr, unsigned long end,
> 				struct zap_details *details)
> {
> 	...
>
> 	do {
> 		pte_t ptent = ptep_get(pte);
>
> 		...
>
> 		if (pte_present(ptent)) {
> 			...
>
> 			ptent = ptep_get_and_clear_full(mm, addr, pte,
> 							tlb->fullmm);
> 			arch_check_zapped_pte(vma, ptent);
> 			tlb_remove_tlb_entry(tlb, pte, addr);
> 			zap_install_uffd_wp_if_needed(vma, addr, pte, details,
> 						      ptent);
> 			if (unlikely(!page)) {
> 				ksm_might_unmap_zero_page(mm, ptent);
> 				continue;
> 			}
>
> 			delay_rmap = 0;
> 			if (!PageAnon(page)) {
> 				if (pte_dirty(ptent)) {
> 					set_page_dirty(page);
> 					if (tlb_delay_rmap(tlb)) {
> 						delay_rmap = 1;
> 						force_flush = 1;
> 					}
> 				}
> 				if (pte_young(ptent) && likely(vma_has_recency(vma)))
> 					mark_page_accessed(page);
> 			}
>
> 			...
> 		}
>
> 		...
> 	} while (pte++, addr += PAGE_SIZE, addr != end);
>
> 	...
> }
>
> Most importantly, file-backed mappings need the access/dirty bits to propagate that information back to the folio, so it will be written back to disk. x86 is also looking at the dirty bit in arch_check_zapped_pte(), and ksm is using it in ksm_might_unmap_zero_page().
>
> Probably for your use case of anon memory on arm64 on a phone, you don't need the return value. But my solution is also setting contpte for file-backed memory, and there are performance wins to be had there, especially for executable mappings where contpte reduces iTLB pressure. (I have other work which ensures these file-backed mappings are in correctly-sized large folios).
>
> So I don't think we can just do a clear without the get part. But I think I have a solution in the shape of clear_ptes(), as described in the other thread, which gives the characteristics you suggest.
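The batched get-and-clear with a single OR'd return value described above could be sketched, purely for illustration, as the following user-space model. All names, the bit positions, and the `swpte_t` type are invented stand-ins, not the real arm64 kernel definitions:

```c
#include <stdint.h>

#define PTE_VALID (1ULL << 0)
#define PTE_YOUNG (1ULL << 1)  /* hypothetical "accessed" bit position */
#define PTE_DIRTY (1ULL << 2)  /* hypothetical "dirty" bit position */
#define CONT_PTES 16           /* 16 x 4K entries = one 64K contpte block */

typedef uint64_t swpte_t;      /* software model of a pte, not the real pte_t */

/*
 * Clear nr consecutive entries and return the first entry's value with the
 * dirty/young bits of the whole batch folded in: the caller learns "was any
 * page in the batch dirty/accessed?" from one returned value, avoiding the
 * storage cost of returning all nr original ptes.
 */
static swpte_t get_and_clear_batch(swpte_t *ptep, int nr)
{
	swpte_t orig = ptep[0];
	int i;

	for (i = 0; i < nr; i++) {
		orig |= ptep[i] & (PTE_DIRTY | PTE_YOUNG);
		ptep[i] = 0;	/* the "clear" half of get-and-clear */
	}
	return orig;
}
```

If any entry in the block was dirty or young, the returned value carries that bit, which is the information the zap path needs; the rest of the original pte (pfn, permissions) can be reconstituted from the first entry.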
Understood. I realized OPPO's code actually also returns the logical OR of the
dirty and access bits while it exposes CONTPTE to mm-core [1]:
static pte_t get_clear_flush(struct mm_struct *mm,
			     unsigned long addr,
			     pte_t *ptep,
			     unsigned long pgsize,
			     unsigned long ncontig,
			     bool flush)
{
	pte_t orig_pte = ptep_get(ptep);
	bool valid = pte_valid(orig_pte);
	unsigned long i, saddr = addr;

	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
		pte_t pte = ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);
		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}

	if (valid && flush) {
		struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);

		flush_tlb_range(&vma, saddr, addr);
	}
	return orig_pte;
}
static inline pte_t __cont_pte_huge_ptep_get_and_clear_flush(struct mm_struct *mm,
							     unsigned long addr,
							     pte_t *ptep,
							     bool flush)
{
	pte_t orig_pte = ptep_get(ptep);

	CHP_BUG_ON(!pte_cont(orig_pte));
	CHP_BUG_ON(!IS_ALIGNED(addr, HPAGE_CONT_PTE_SIZE));
	CHP_BUG_ON(!IS_ALIGNED(pte_pfn(orig_pte), HPAGE_CONT_PTE_NR));

	return get_clear_flush(mm, addr, ptep, PAGE_SIZE, CONT_PTES, flush);
}
[1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/cont_pte_hugepage.c#L1421
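To illustrate why the OR'd accessed/dirty bits are all a zap-style caller needs back, here is a hedged user-space model. The struct fields stand in for set_page_dirty()/mark_page_accessed(), and every name and bit position is invented for the sketch, not taken from the kernel:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SW_PTE_DIRTY (1u << 0)	/* hypothetical dirty bit */
#define SW_PTE_YOUNG (1u << 1)	/* hypothetical accessed bit */

struct folio_model {
	bool dirty;	/* stands in for set_page_dirty() having run */
	bool accessed;	/* stands in for mark_page_accessed() */
};

/* Clear a run of software "ptes", accumulating the OR of their a/d bits. */
static uint32_t clear_run(uint32_t *ptes, size_t nr)
{
	uint32_t ad = 0;
	size_t i;

	for (i = 0; i < nr; i++) {
		ad |= ptes[i] & (SW_PTE_DIRTY | SW_PTE_YOUNG);
		ptes[i] = 0;
	}
	return ad;
}

/*
 * Zap-style caller: one folio-wide dirty/accessed decision per batch
 * instead of nr per-pte checks, which is everything the teardown path
 * consumes from the return value of a batched get-and-clear.
 */
static void zap_run(struct folio_model *folio, uint32_t *ptes, size_t nr)
{
	uint32_t ad = clear_run(ptes, nr);

	if (ad & SW_PTE_DIRTY)
		folio->dirty = true;
	if (ad & SW_PTE_YOUNG)
		folio->accessed = true;
}
```

The point of the model: folding the per-pte bits into one value loses nothing the caller acts on, since writeback and LRU aging are per-folio decisions anyway.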
>
>
> >
> > Is it possible for us to introduce a new api like?
> >
> > bool clear_folio_ptes(folio, ptep)
> > {
> > 	if (ptes are contiguously mapped) {
> > 		clear all ptes all together // this also clears all CONTPTEs
> > 		return true;
> > 	}
> > 	return false;
> > }
> >
> > in zap_pte_range():
> >
> > if (large_folio(folio) && clear_folio_ptes(folio, ptep)) {
> > 	addr += nr - 1;
> > 	pte += nr - 1;
> > } else
> > 	old path.
> >
> >
> >>
> >>>
> >>> zap_pte_range() is the most frequent path for the userspace libc heap,
> >>> as I explained before: libc calls madvise(MADV_DONTNEED) very often, so
> >>> this is crucial to performance.
> >>>
> >>> Could this approach also let you drop your "full" version, by moving to
> >>> flushing the whole large folio? Then we wouldn't need to depend on
> >>> fullmm any more.
> >>>
> >>>>
> >>>> I don't think there is any correctness issue here. But there is a problem with
> >>>> fragility, as raised by Alistair. I have some ideas on potentially how to solve
> >>>> that. I'm going to try to work on it this afternoon and will post if I get some
> >>>> confidence that it is a real solution.
> >>>>
> >>>> Thanks,
> >>>> Ryan
> >>>>
> >>>>>
> >>>>> static inline pte_t __cont_pte_huge_ptep_get_and_clear_flush(struct mm_struct *mm,
> >>>>> 							     unsigned long addr,
> >>>>> 							     pte_t *ptep,
> >>>>> 							     bool flush)
> >>>>> {
> >>>>> 	pte_t orig_pte = ptep_get(ptep);
> >>>>>
> >>>>> 	CHP_BUG_ON(!pte_cont(orig_pte));
> >>>>> 	CHP_BUG_ON(!IS_ALIGNED(addr, HPAGE_CONT_PTE_SIZE));
> >>>>> 	CHP_BUG_ON(!IS_ALIGNED(pte_pfn(orig_pte), HPAGE_CONT_PTE_NR));
> >>>>>
> >>>>> 	return get_clear_flush(mm, addr, ptep, PAGE_SIZE, CONT_PTES, flush);
> >>>>> }
> >>>>>
> >>>>> [1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/memory.c#L1539
> >>>>>
> >>>>>> + */
> >>>>>> +
> >>>>>> + return __ptep_get_and_clear(mm, addr, ptep);
> >>>>>> +}
> >>>>>> +EXPORT_SYMBOL(contpte_ptep_get_and_clear_full);
> >>>>>> +
> >>>>>
> >>>
> > Thanks
> > Barry
>