Message-ID: <CAGsJ_4wvEQA91M+RBy0hp5B3qpb5q4rtZT+qsWcVcXgstL+NKw@mail.gmail.com>
Date: Mon, 26 Feb 2024 19:39:51 +1300
From: Barry Song <21cnbao@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: ryan.roberts@....com, akpm@...ux-foundation.org, david@...hat.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, mhocko@...e.com,
shy828301@...il.com, wangkefeng.wang@...wei.com, willy@...radead.org,
xiang@...nel.org, ying.huang@...el.com, yuzhao@...gle.com, surenb@...gle.com,
steven.price@....com, Chuanhua Han <hanchuanhua@...o.com>,
Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH RFC 6/6] mm: madvise: don't split mTHP for MADV_PAGEOUT
On Mon, Jan 29, 2024 at 3:15 PM Chris Li <chrisl@...nel.org> wrote:
>
> On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@...il.com> wrote:
> >
> > From: Chuanhua Han <hanchuanhua@...o.com>
> >
> > MADV_PAGEOUT and MADV_FREE are common cases in Android. Ryan's patchset has
> > supported swapping large folios out as a whole for vmscan case. This patch
> > extends the feature to madvise.
> >
> > If madvised range covers the whole large folio, we don't split it. Otherwise,
> > we still need to split it.
> >
> > This patch doesn't depend on ARM64's CONT-PTE, alternatively, it defines one
> > helper named pte_range_cont_mapped() to check if all PTEs are contiguously
> > mapped to a large folio.
> >
> > Signed-off-by: Chuanhua Han <hanchuanhua@...o.com>
> > Co-developed-by: Barry Song <v-songbaohua@...o.com>
> > Signed-off-by: Barry Song <v-songbaohua@...o.com>
> > ---
> > include/asm-generic/tlb.h | 10 +++++++
> > include/linux/pgtable.h | 60 +++++++++++++++++++++++++++++++++++++++
> > mm/madvise.c | 48 +++++++++++++++++++++++++++++++
> > 3 files changed, 118 insertions(+)
> >
> > diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> > index 129a3a759976..f894e22da5d6 100644
> > --- a/include/asm-generic/tlb.h
> > +++ b/include/asm-generic/tlb.h
> > @@ -608,6 +608,16 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
> > __tlb_remove_tlb_entry(tlb, ptep, address); \
> > } while (0)
> >
> > +#define tlb_remove_nr_tlb_entry(tlb, ptep, address, nr) \
> > + do { \
> > + int i; \
> > + tlb_flush_pte_range(tlb, address, \
> > + PAGE_SIZE * nr); \
> > + for (i = 0; i < nr; i++) \
> > + __tlb_remove_tlb_entry(tlb, ptep + i, \
> > + address + i * PAGE_SIZE); \
> > + } while (0)
> > +
> > #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address) \
> > do { \
> > unsigned long _sz = huge_page_size(h); \
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 37fe83b0c358..da0c1cf447e3 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -320,6 +320,42 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
> > }
> > #endif
> >
> > +#ifndef pte_range_cont_mapped
> > +static inline bool pte_range_cont_mapped(unsigned long start_pfn,
> > + pte_t *start_pte,
> > + unsigned long start_addr,
> > + int nr)
> > +{
> > + int i;
> > + pte_t pte_val;
> > +
> > + for (i = 0; i < nr; i++) {
> > + pte_val = ptep_get(start_pte + i);
> > +
> > + if (pte_none(pte_val))
> > + return false;
>
> Hmm, the following check pte_pfn == start_pfn + i should have covered
> the pte none case?
>
> I think the pte_none means it can't have a valid pfn. So this check
> can be skipped?
Yes, the pte_pfn == start_pfn + i check should have covered the pte_none
case, but leaving pte_none there seems to make the code more readable. I
guess we need to check pte_present() too: there is a small chance that a
swp_offset can equal pte_pfn after some shifting, in case a PTE within
the large folio range has become a swap entry.
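
A minimal (untested) sketch of what I mean, with pte_present() covering
both the pte_none() and the swap-entry cases:

static inline bool pte_range_cont_mapped(unsigned long start_pfn,
					 pte_t *start_pte,
					 unsigned long start_addr,
					 int nr)
{
	int i;
	pte_t pte_val;

	for (i = 0; i < nr; i++) {
		pte_val = ptep_get(start_pte + i);

		/* a swap entry is not present; don't compare its offset as a pfn */
		if (!pte_present(pte_val))
			return false;

		if (pte_pfn(pte_val) != (start_pfn + i))
			return false;
	}

	return true;
}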
I am still thinking about whether we have some cheaper way to check if a
folio is still entirely mapped, maybe something like
if (list_empty(&folio->_deferred_list))?
>
> > +
> > + if (pte_pfn(pte_val) != (start_pfn + i))
> > + return false;
> > + }
> > +
> > + return true;
> > +}
> > +#endif
> > +
> > +#ifndef pte_range_young
> > +static inline bool pte_range_young(pte_t *start_pte, int nr)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < nr; i++)
> > + if (pte_young(ptep_get(start_pte + i)))
> > + return true;
> > +
> > + return false;
> > +}
> > +#endif
> > +
> > #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> > static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> > unsigned long address,
> > @@ -580,6 +616,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
> > }
> > #endif
> >
> > +#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_RANGE_FULL
> > +static inline pte_t ptep_get_and_clear_range_full(struct mm_struct *mm,
> > + unsigned long start_addr,
> > + pte_t *start_pte,
> > + int nr, int full)
> > +{
> > + int i;
> > + pte_t pte;
> > +
> > + pte = ptep_get_and_clear_full(mm, start_addr, start_pte, full);
> > +
> > + for (i = 1; i < nr; i++)
> > + ptep_get_and_clear_full(mm, start_addr + i * PAGE_SIZE,
> > + start_pte + i, full);
> > +
> > + return pte;
> > +}
> >
> > /*
> > * If two threads concurrently fault at the same page, the thread that
> > @@ -995,6 +1048,13 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> > })
> > #endif
> >
> > +#ifndef pte_nr_addr_end
> > +#define pte_nr_addr_end(addr, size, end) \
> > +({ unsigned long __boundary = ((addr) + size) & (~(size - 1)); \
> > + (__boundary - 1 < (end) - 1)? __boundary: (end); \
> > +})
> > +#endif
> > +
> > /*
> > * When walking page tables, we usually want to skip any p?d_none entries;
> > * and any p?d_bad entries - reporting the error before resetting to none.
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 912155a94ed5..262460ac4b2e 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -452,6 +452,54 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> > if (folio_test_large(folio)) {
> > int err;
> >
> > + if (!folio_test_pmd_mappable(folio)) {
>
> This section of code is indented too far to the right.
> You can do:
>
> if (folio_test_pmd_mappable(folio))
> goto split;
>
> to make the code flatter.
I guess we don't need "if (!folio_test_pmd_mappable(folio))" at all, as
the pmd case has already been handled at the very beginning of
madvise_cold_or_pageout_pte_range().
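
Roughly (untested), right after the folio_test_large(folio) check the
block could then start directly with the range calculation and use your
goto split suggestion:

			int nr_pages = folio_nr_pages(folio);
			unsigned long folio_size = PAGE_SIZE * nr_pages;
			unsigned long start_addr = ALIGN_DOWN(addr, nr_pages * PAGE_SIZE);
			unsigned long start_pfn = page_to_pfn(folio_page(folio, 0));
			pte_t *start_pte = pte - (addr - start_addr) / PAGE_SIZE;
			unsigned long next = pte_nr_addr_end(addr, folio_size, end);

			if (!pte_range_cont_mapped(start_pfn, start_pte,
						   start_addr, nr_pages))
				goto split;

with the rest of the mTHP handling following at the same indentation level.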
>
> > + int nr_pages = folio_nr_pages(folio);
> > + unsigned long folio_size = PAGE_SIZE * nr_pages;
> > + unsigned long start_addr = ALIGN_DOWN(addr, nr_pages * PAGE_SIZE);;
> > + unsigned long start_pfn = page_to_pfn(folio_page(folio, 0));
> > + pte_t *start_pte = pte - (addr - start_addr) / PAGE_SIZE;
> > + unsigned long next = pte_nr_addr_end(addr, folio_size, end);
> > +
> > + if (!pte_range_cont_mapped(start_pfn, start_pte, start_addr, nr_pages))
> > + goto split;
> > +
> > + if (next - addr != folio_size) {
>
> Nitpick: One line statement does not need {
>
> > + goto split;
> > + } else {
>
> When the previous if statement already "goto split", there is no need
> for the else. You can save one level of indentation.
right!
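So the two checks would simply fall through, something like (untested):

			if (next - addr != folio_size)
				goto split;

			/* Do not interfere with other mappings of this page */
			if (folio_estimated_sharers(folio) != 1)
				goto skip;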
>
>
>
> > + /* Do not interfere with other mappings of this page */
> > + if (folio_estimated_sharers(folio) != 1)
> > + goto skip;
> > +
> > + VM_BUG_ON(addr != start_addr || pte != start_pte);
> > +
> > + if (pte_range_young(start_pte, nr_pages)) {
> > + ptent = ptep_get_and_clear_range_full(mm, start_addr, start_pte,
> > + nr_pages, tlb->fullmm);
> > + ptent = pte_mkold(ptent);
> > +
> > + set_ptes(mm, start_addr, start_pte, ptent, nr_pages);
> > + tlb_remove_nr_tlb_entry(tlb, start_pte, start_addr, nr_pages);
> > + }
> > +
> > + folio_clear_referenced(folio);
> > + folio_test_clear_young(folio);
> > + if (pageout) {
> > + if (folio_isolate_lru(folio)) {
> > + if (folio_test_unevictable(folio))
> > + folio_putback_lru(folio);
> > + else
> > + list_add(&folio->lru, &folio_list);
> > + }
> > + } else
> > + folio_deactivate(folio);
>
> I notice this section is very similar to the earlier statements inside
> the same function.
> "if (pmd_trans_huge(*pmd)) {"
>
> Wondering if there is some way to unify the two a bit somehow.
We have duplicated the code three times - for the pmd-mapped THP,
pte-mapped large folio, and normal folio cases. I am not quite sure
whether we can extract a common function.
>
> Also notice if you test the else condition first,
>
> If (!pageout) {
> folio_deactivate(folio);
> goto skip;
> }
>
> You can save one level of indentation.
> Not your fault, I notice the section inside (pmd_trans_huge(*pmd))
> does exactly the same thing.
>
We can address this issue once we have a common function.
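Maybe something like the below (untested) sketch, with the helper name
just a placeholder, which also tests !pageout first as you suggested:

static void pageout_or_deactivate_folio(struct folio *folio,
					struct list_head *folio_list,
					bool pageout)
{
	folio_clear_referenced(folio);
	folio_test_clear_young(folio);

	if (!pageout) {
		folio_deactivate(folio);
		return;
	}

	if (folio_isolate_lru(folio)) {
		if (folio_test_unevictable(folio))
			folio_putback_lru(folio);
		else
			list_add(&folio->lru, folio_list);
	}
}

Each of the three paths could then call it after doing its own pte and
reference handling.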
> Chris
>
>
> > + }
> > +skip:
> > + pte += (next - PAGE_SIZE - (addr & PAGE_MASK))/PAGE_SIZE;
> > + addr = next - PAGE_SIZE;
> > + continue;
> > +
> > + }
> > +split:
> > if (folio_estimated_sharers(folio) != 1)
> > break;
> > if (pageout_anon_only_filter && !folio_test_anon(folio))
> > --
> > 2.34.1
> >
> >
Thanks
Barry