Message-ID: <2c1eb3ba-3f76-4796-b076-307a72cddc76@arm.com>
Date: Mon, 15 Apr 2024 09:47:15 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Lance Yang <ioworker0@...il.com>, akpm@...ux-foundation.org
Cc: zokeefe@...gle.com, 21cnbao@...il.com, shy828301@...il.com,
david@...hat.com, mhocko@...e.com, fengwei.yin@...el.com,
xiehuan09@...il.com, wangkefeng.wang@...wei.com, songmuchun@...edance.com,
peterx@...hat.com, minchan@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 1/2] mm/madvise: optimize lazyfreeing with mTHP in
madvise_free
On 13/04/2024 01:22, Lance Yang wrote:
> This patch optimizes lazyfreeing with PTE-mapped mTHP[1] (inspired by
> David Hildenbrand[2]). We aim to avoid unnecessary folio splitting if the
> large folio is fully mapped within the target range.
>
> If a large folio is locked or shared, or if we fail to split it, we just
> leave it in place and advance to the next PTE in the range. Note that this
> changes the behaviour: previously, any failure of this sort would cause the
> entire operation to give up. As large folios become more common, sticking
> with the old way could result in wasted opportunities.
>
> On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> the same size results in the following runtimes for madvise(MADV_FREE) in
> seconds (shorter is better):
>
> Folio Size | Old | New | Change
> ------------------------------------------
> 4KiB | 0.590251 | 0.590259 | 0%
> 16KiB | 2.990447 | 0.185655 | -94%
> 32KiB | 2.547831 | 0.104870 | -95%
> 64KiB | 2.457796 | 0.052812 | -97%
> 128KiB | 2.281034 | 0.032777 | -99%
> 256KiB | 2.230387 | 0.017496 | -99%
> 512KiB | 2.189106 | 0.010781 | -99%
> 1024KiB | 2.183949 | 0.007753 | -99%
> 2048KiB | 0.002799 | 0.002804 | 0%
>
> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>
> Signed-off-by: Lance Yang <ioworker0@...il.com>
This is looking close IMHO. Just one bug and a few suggestions below.
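(As an aside, for anyone wanting to reproduce the numbers above: the test
program isn't included in the commit log, but I assume it is something like
the untested sketch below - fault in a 1GiB anon VMA, with the relevant mTHP
size enabled via /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled,
then time the madvise() call.)

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	#define SIZE	(1UL << 30)	/* 1GiB VMA */

	int main(void)
	{
		struct timespec t0, t1;
		char *buf;

		buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;

		/* Fault in the whole range so it is backed by large folios. */
		memset(buf, 1, SIZE);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		madvise(buf, SIZE, MADV_FREE);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("%.6f\n", (t1.tv_sec - t0.tv_sec) +
				 (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}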
> ---
> include/linux/mm_types.h | 9 +++
> include/linux/pgtable.h | 42 +++++++++++
> mm/internal.h | 12 +++-
> mm/madvise.c | 147 ++++++++++++++++++++++-----------------
> mm/memory.c | 4 +-
> 5 files changed, 147 insertions(+), 67 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c432add95913..3c224e25f473 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -1367,6 +1367,15 @@ enum fault_flag {
>
> typedef unsigned int __bitwise zap_flags_t;
>
> +/* Flags for clear_young_dirty_ptes(). */
> +typedef int __bitwise cydp_t;
> +
> +/* make PTEs old, as per pte_mkold() */
> +#define CYDP_CLEAR_YOUNG ((__force cydp_t)BIT(0))
> +
> +/* make PTEs clean, as per pte_mkclean() */
> +#define CYDP_CLEAR_DIRTY ((__force cydp_t)BIT(1))
> +
> /*
> * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
> * other. Here is what they mean, and how to use them:
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e2f45e22a6d1..d7958243f099 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -489,6 +489,48 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> }
> #endif
>
> +#ifndef clear_young_dirty_ptes
> +/**
> + * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
> + * same folio as old/clean.
> + * @mm: Address space the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to mark old/clean.
> + * @flags: Flags to modify the PTE batch semantics.
> + *
> + * May be overridden by the architecture; otherwise, implemented by
> + * get_and_clear/modify/set for each pte in the range.
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock. The PTEs map consecutive
> + * pages that belong to the same folio. The PTEs are all in the same PMD.
> + */
> +static inline void clear_young_dirty_ptes(struct mm_struct *mm,
My suggestion to introduce clear_young_dirty_ptes() was so that we could
*remove* mkold_ptes(). So I think it would be good to split that out into a
separate preparatory patch, where you change all the callers to call the new
function and remove mkold_ptes().
Additionally, since many arches already override ptep_test_and_clear_young()
(and that's what the default mkold_ptes() called), you might want to call that
in the loop below when *only* CYDP_CLEAR_YOUNG is set, to avoid the possibility
of any regression. I know I only made this change for the swap-out series, so
what you have done may well be OK in practice - and it is certainly cleaner. It
would be good to hear others' opinions.
Note that the existing mkold_ptes() takes a vma instead of an mm (because
that's what ptep_test_and_clear_young() takes), so I suggest passing the vma
here too; you can get the mm from vma->vm_mm.
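With that in place, the existing mkold_ptes() call site in
madvise_cold_or_pageout_pte_range() would end up as something like this
(sketch):

-		mkold_ptes(vma, addr, pte, nr);
+		clear_young_dirty_ptes(vma, addr, pte, nr, CYDP_CLEAR_YOUNG);

and mkold_ptes() itself can then be deleted from pgtable.h in the same
preparatory patch.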
> + unsigned long addr, pte_t *ptep,
> + unsigned int nr, cydp_t flags)
> +{
> + pte_t pte;
> +
> + for (;;) {
Suggestion:
	if (flags == CYDP_CLEAR_YOUNG) {
		ptep_test_and_clear_young(vma, addr, ptep);
	} else {
> + pte = ptep_get_and_clear(mm, addr, ptep);
> +
> + if (flags | CYDP_CLEAR_YOUNG)
bug: this needs to be a bitwise AND (&). As written, the condition always
evaluates to true. Same for the next one.
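(e.g. with flags == CYDP_CLEAR_DIRTY, which is BIT(1):

	flags | CYDP_CLEAR_YOUNG  ->  0x2 | 0x1 == 0x3, always true
	flags & CYDP_CLEAR_YOUNG  ->  0x2 & 0x1 == 0x0, false as intended)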
> + pte = pte_mkold(pte);
> + if (flags | CYDP_CLEAR_DIRTY)
> + pte = pte_mkclean(pte);
> +
> + set_pte_at(mm, addr, ptep, pte);
	}
> + if (--nr == 0)
> + break;
> + ptep++;
> + addr += PAGE_SIZE;
> + }
> +}
> +#endif
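FWIW, folding in the vma-based signature, the ptep_test_and_clear_young()
fast path and the bitwise fix, I'd expect the helper to end up looking
roughly like this (untested sketch):

	static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
						  unsigned long addr, pte_t *ptep,
						  unsigned int nr, cydp_t flags)
	{
		pte_t pte;

		for (;;) {
			if (flags == CYDP_CLEAR_YOUNG) {
				/* Preserve any arch override of this helper. */
				ptep_test_and_clear_young(vma, addr, ptep);
			} else {
				pte = ptep_get_and_clear(vma->vm_mm, addr, ptep);
				if (flags & CYDP_CLEAR_YOUNG)
					pte = pte_mkold(pte);
				if (flags & CYDP_CLEAR_DIRTY)
					pte = pte_mkclean(pte);
				set_pte_at(vma->vm_mm, addr, ptep, pte);
			}
			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
		}
	}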
> +
> static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep)
> {
> diff --git a/mm/internal.h b/mm/internal.h
> index 3c0f3e3f9d99..ab8fcdeaf6eb 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> * first one is writable.
> * @any_young: Optional pointer to indicate whether any entry except the
> * first one is young.
> + * @any_dirty: Optional pointer to indicate whether any entry except the
> + * first one is dirty.
> *
> * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> * pages of the same large folio.
> @@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> */
> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> - bool *any_writable, bool *any_young)
> + bool *any_writable, bool *any_young, bool *any_dirty)
> {
> unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> const pte_t *end_ptep = start_ptep + max_nr;
> pte_t expected_pte, *ptep;
> - bool writable, young;
> + bool writable, young, dirty;
> int nr;
>
> if (any_writable)
> *any_writable = false;
> if (any_young)
> *any_young = false;
> + if (any_dirty)
> + *any_dirty = false;
>
> VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> @@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> writable = !!pte_write(pte);
> if (any_young)
> young = !!pte_young(pte);
> + if (any_dirty)
> + dirty = !!pte_dirty(pte);
> pte = __pte_batch_clear_ignored(pte, flags);
>
> if (!pte_same(pte, expected_pte))
> @@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> *any_writable |= writable;
> if (any_young)
> *any_young |= young;
> + if (any_dirty)
> + *any_dirty |= dirty;
>
> nr = pte_batch_hint(ptep, pte);
> expected_pte = pte_advance_pfn(expected_pte, nr);
> diff --git a/mm/madvise.c b/mm/madvise.c
> index d34ca6983227..b4103e2df346 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> file_permission(vma->vm_file, MAY_WRITE) == 0;
> }
>
> +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> + struct folio *folio, pte_t *ptep,
> + pte_t pte, bool *any_young,
> + bool *any_dirty)
> +{
> + int max_nr = (end - addr) / PAGE_SIZE;
> + const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +
> + return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> + any_young, any_dirty);
> +}
> +
> +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> + unsigned long addr,
> + struct folio *folio, pte_t **pte,
> + spinlock_t **ptl)
> +{
> + int err;
> +
> + if (!folio_trylock(folio))
> + return false;
> +
> + folio_get(folio);
> + pte_unmap_unlock(*pte, *ptl);
> + err = split_folio(folio);
> + folio_unlock(folio);
> + folio_put(folio);
> +
> + *pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> +
> + return err == 0;
> +}
> +
> static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> unsigned long addr, unsigned long end,
> struct mm_walk *walk)
> @@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> * next pte in the range.
> */
> if (folio_test_large(folio)) {
> - const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> - FPB_IGNORE_SOFT_DIRTY;
> - int max_nr = (end - addr) / PAGE_SIZE;
> bool any_young;
>
> - nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> - fpb_flags, NULL, &any_young);
> - if (any_young)
> - ptent = pte_mkyoung(ptent);
> + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> + ptent, &any_young, NULL);
>
> if (nr < folio_nr_pages(folio)) {
> - int err;
> -
> if (folio_likely_mapped_shared(folio))
> continue;
> if (pageout_anon_only_filter && !folio_test_anon(folio))
> continue;
> - if (!folio_trylock(folio))
> - continue;
> - folio_get(folio);
> +
> arch_leave_lazy_mmu_mode();
> - pte_unmap_unlock(start_pte, ptl);
> - start_pte = NULL;
> - err = split_folio(folio);
> - folio_unlock(folio);
> - folio_put(folio);
> - start_pte = pte =
> - pte_offset_map_lock(mm, pmd, addr, &ptl);
> + if (madvise_pte_split_folio(mm, pmd, addr,
> + folio, &start_pte, &ptl))
> + nr = 0;
> if (!start_pte)
> break;
> + pte = start_pte;
> arch_enter_lazy_mmu_mode();
> - if (!err)
> - nr = 0;
> continue;
> }
> +
> + if (any_young)
> + ptent = pte_mkyoung(ptent);
> }
>
> /*
> @@ -507,7 +529,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> continue;
>
> if (!pageout && pte_young(ptent)) {
> - mkold_ptes(vma, addr, pte, nr);
> + clear_young_dirty_ptes(mm, addr, pte, nr,
> + CYDP_CLEAR_YOUNG);
> tlb_remove_tlb_entries(tlb, pte, nr, addr);
> }
>
> @@ -687,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> continue;
>
> /*
> - * If pmd isn't transhuge but the folio is large and
> - * is owned by only this process, split it and
> - * deactivate all pages.
> + * If we encounter a large folio, only split it if it is not
> + * fully mapped within the range we are operating on. Otherwise
> + * leave it as is so that it can be marked as lazyfree. If we
> + * fail to split a folio, leave it in place and advance to the
> + * next pte in the range.
> */
> if (folio_test_large(folio)) {
> - int err;
> + bool any_young, any_dirty;
>
> - if (folio_likely_mapped_shared(folio))
> - break;
> - if (!folio_trylock(folio))
> - break;
> - folio_get(folio);
> - arch_leave_lazy_mmu_mode();
> - pte_unmap_unlock(start_pte, ptl);
> - start_pte = NULL;
> - err = split_folio(folio);
> - folio_unlock(folio);
> - folio_put(folio);
> - if (err)
> - break;
> - start_pte = pte =
> - pte_offset_map_lock(mm, pmd, addr, &ptl);
> - if (!start_pte)
> - break;
> - arch_enter_lazy_mmu_mode();
> - pte--;
> - addr -= PAGE_SIZE;
> - continue;
> + nr = madvise_folio_pte_batch(addr, end, folio, pte,
> + ptent, &any_young, &any_dirty);
> +
> + if (nr < folio_nr_pages(folio)) {
> + if (folio_likely_mapped_shared(folio))
> + continue;
> +
> + arch_leave_lazy_mmu_mode();
> + if (madvise_pte_split_folio(mm, pmd, addr,
> + folio, &start_pte, &ptl))
> + nr = 0;
> + if (!start_pte)
> + break;
> + pte = start_pte;
> + arch_enter_lazy_mmu_mode();
> + continue;
> + }
> +
> + if (any_young)
> + ptent = pte_mkyoung(ptent);
> + if (any_dirty)
> + ptent = pte_mkdirty(ptent);
> }
>
> + if (folio_mapcount(folio) != folio_nr_pages(folio))
> + continue;
> +
> if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> if (!folio_trylock(folio))
> continue;
> /*
> - * If folio is shared with others, we mustn't clear
> - * the folio's dirty flag.
> + * If we have a large folio at this point, we know it is
> + * fully mapped so if its mapcount is the same as its
> + * number of pages, it must be exclusive.
> */
> - if (folio_mapcount(folio) != 1) {
> + if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> folio_unlock(folio);
> continue;
> }
> @@ -740,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> }
>
> if (pte_young(ptent) || pte_dirty(ptent)) {
> - /*
> - * Some of architecture(ex, PPC) don't update TLB
> - * with set_pte_at and tlb_remove_tlb_entry so for
> - * the portability, remap the pte with old|clean
> - * after pte clearing.
> - */
> - ptent = ptep_get_and_clear_full(mm, addr, pte,
> - tlb->fullmm);
> -
> - ptent = pte_mkold(ptent);
> - ptent = pte_mkclean(ptent);
> - set_pte_at(mm, addr, pte, ptent);
> - tlb_remove_tlb_entry(tlb, pte, addr);
> + clear_young_dirty_ptes(mm, addr, pte, nr,
> + CYDP_CLEAR_YOUNG |
> + CYDP_CLEAR_DIRTY);
> + tlb_remove_tlb_entries(tlb, pte, nr, addr);
> }
> folio_mark_lazyfree(folio);
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index 76157b32faa8..b6fa5146b260 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> flags |= FPB_IGNORE_SOFT_DIRTY;
>
> nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> - &any_writable, NULL);
> + &any_writable, NULL, NULL);
> folio_ref_add(folio, nr);
> if (folio_test_anon(folio)) {
> if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> @@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> */
> if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> - NULL, NULL);
> + NULL, NULL, NULL);
>
> zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> addr, details, rss, force_flush,