Message-ID: <CAOUHufZeWWs8f4-BokLBgy_oSbT-pfjFpJFNZ+tW0qW9RifX0A@mail.gmail.com>
Date: Wed, 23 Oct 2024 10:56:43 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Usama Arif <usamaarif642@...il.com>
Cc: Zi Yan <ziy@...dia.com>, akpm@...ux-foundation.org, linux-mm@...ck.org,
hannes@...xchg.org, riel@...riel.com, shakeel.butt@...ux.dev,
roman.gushchin@...ux.dev, david@...hat.com, npache@...hat.com,
baohua@...nel.org, ryan.roberts@....com, rppt@...nel.org, willy@...radead.org,
cerasuolodomenico@...il.com, ryncsn@...il.com, corbet@....net,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org, kernel-team@...a.com,
Shuang Zhai <zhais@...gle.com>
Subject: Re: [PATCH v5 2/6] mm: remap unused subpages to shared zeropage when
splitting isolated thp
On Wed, Oct 23, 2024 at 10:51 AM Usama Arif <usamaarif642@...il.com> wrote:
>
> On 23/10/2024 17:21, Zi Yan wrote:
> > On 30 Aug 2024, at 6:03, Usama Arif wrote:
> >
> >> From: Yu Zhao <yuzhao@...gle.com>
> >>
> >> Here, "unused" means containing only zeros and being inaccessible to
> >> userspace. When splitting an isolated thp under reclaim or migration,
> >> the unused subpages can be mapped to the shared zeropage, hence saving
> >> memory. This is particularly helpful when the internal fragmentation
> >> of a thp is high, i.e. it has many untouched subpages.
> >>
> >> This is also a prerequisite for the THP low-utilization shrinker
> >> introduced in later patches, where underutilized THPs are split and
> >> their zero-filled subpages are freed, saving memory.
> >>
> >> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
> >> Tested-by: Shuang Zhai <zhais@...gle.com>
> >> Signed-off-by: Usama Arif <usamaarif642@...il.com>
> >> ---
> >> include/linux/rmap.h | 7 ++++-
> >> mm/huge_memory.c | 8 ++---
> >> mm/migrate.c | 72 ++++++++++++++++++++++++++++++++++++++------
> >> mm/migrate_device.c | 4 +--
> >> 4 files changed, 75 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> >> index 91b5935e8485..d5e93e44322e 100644
> >> --- a/include/linux/rmap.h
> >> +++ b/include/linux/rmap.h
> >> @@ -745,7 +745,12 @@ int folio_mkclean(struct folio *);
> >> int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
> >> struct vm_area_struct *vma);
> >>
> >> -void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
> >> +enum rmp_flags {
> >> + RMP_LOCKED = 1 << 0,
> >> + RMP_USE_SHARED_ZEROPAGE = 1 << 1,
> >> +};
> >> +
> >> +void remove_migration_ptes(struct folio *src, struct folio *dst, int flags);
> >>
> >> /*
> >> * rmap_walk_control: To control rmap traversing for specific needs
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 0c48806ccb9a..af60684e7c70 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3020,7 +3020,7 @@ bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> >> return false;
> >> }
> >>
> >> -static void remap_page(struct folio *folio, unsigned long nr)
> >> +static void remap_page(struct folio *folio, unsigned long nr, int flags)
> >> {
> >> int i = 0;
> >>
> >> @@ -3028,7 +3028,7 @@ static void remap_page(struct folio *folio, unsigned long nr)
> >> if (!folio_test_anon(folio))
> >> return;
> >> for (;;) {
> >> - remove_migration_ptes(folio, folio, true);
> >> + remove_migration_ptes(folio, folio, RMP_LOCKED | flags);
> >> i += folio_nr_pages(folio);
> >> if (i >= nr)
> >> break;
> >> @@ -3240,7 +3240,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >>
> >> if (nr_dropped)
> >> shmem_uncharge(folio->mapping->host, nr_dropped);
> >> - remap_page(folio, nr);
> >> + remap_page(folio, nr, PageAnon(head) ? RMP_USE_SHARED_ZEROPAGE : 0);
> >>
> >> /*
> >> * set page to its compound_head when split to non order-0 pages, so
> >> @@ -3542,7 +3542,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> >> if (mapping)
> >> xas_unlock(&xas);
> >> local_irq_enable();
> >> - remap_page(folio, folio_nr_pages(folio));
> >> + remap_page(folio, folio_nr_pages(folio), 0);
> >> ret = -EAGAIN;
> >> }
> >>
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index 6f9c62c746be..d039863e014b 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -204,13 +204,57 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
> >> return true;
> >> }
> >>
> >> +static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
> >> + struct folio *folio,
> >> + unsigned long idx)
> >> +{
> >> + struct page *page = folio_page(folio, idx);
> >> + bool contains_data;
> >> + pte_t newpte;
> >> + void *addr;
> >> +
> >> + VM_BUG_ON_PAGE(PageCompound(page), page);
> >
> > This should be:
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index e950fd62607f..7ffdbe078aa7 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -206,7 +206,8 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
> > pte_t newpte;
> > void *addr;
> >
> > - VM_BUG_ON_PAGE(PageCompound(page), page);
> > + if (PageCompound(page))
> > + return false;
> > VM_BUG_ON_PAGE(!PageAnon(page), page);
> > VM_BUG_ON_PAGE(!PageLocked(page), page);
> > VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
> >
> > Otherwise, splitting anonymous large folios to non-order-0 ones just
> > triggers this BUG_ON.
> >
>
> That makes sense. Would you like to send the fix?
>
> Adding Yu Zhao to "To" in case he has any objections.
LGTM. Thanks!
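
For context, the body of try_to_map_unused_to_zeropage() in the patch,
with the fix above folded in, goes roughly like this (a condensed sketch
of the patch's code, not a verbatim copy; minor details may differ):

static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
					  struct folio *folio,
					  unsigned long idx)
{
	struct page *page = folio_page(folio, idx);
	bool contains_data;
	pte_t newpte;
	void *addr;

	/* With the fix above: skip, rather than BUG on, compound subpages. */
	if (PageCompound(page))
		return false;
	VM_BUG_ON_PAGE(!PageAnon(page), page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);
	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);

	/* mlocked VMAs and mms that forbid the zeropage keep real pages. */
	if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
	    mm_forbids_zeropage(pvmw->vma->vm_mm))
		return false;

	/* Only a fully zero-filled subpage can be replaced. */
	addr = kmap_local_page(page);
	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
	kunmap_local(addr);
	if (contains_data)
		return false;

	/* Install a special zeropage pte in place of the migration entry. */
	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
				       pvmw->vma->vm_page_prot));
	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);

	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
	return true;
}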
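
The RMP_USE_SHARED_ZEROPAGE plumbing is equally small: as sketched below
(names follow the patch; abridged), remove_migration_ptes() carries the
flag to the per-pte callback through the rmap walk argument:

struct rmap_walk_arg {
	struct folio *folio;
	bool map_unused_to_zeropage;
};

void remove_migration_ptes(struct folio *src, struct folio *dst, int flags)
{
	struct rmap_walk_arg rmap_walk_arg = {
		.folio = src,
		.map_unused_to_zeropage = flags & RMP_USE_SHARED_ZEROPAGE,
	};
	struct rmap_walk_control rwc = {
		.rmap_one = remove_migration_pte,
		.arg = &rmap_walk_arg,
	};

	/* Only a folio remapped onto itself (a split) may use the zeropage. */
	VM_BUG_ON_FOLIO((flags & RMP_USE_SHARED_ZEROPAGE) && (src != dst), src);

	if (flags & RMP_LOCKED)
		rmap_walk_locked(dst, &rwc);
	else
		rmap_walk(dst, &rwc);
}

Inside remove_migration_pte()'s page_vma_mapped_walk() loop, the zeropage
attempt then comes before a real migration entry is restored:

	if (rmap_walk_arg->map_unused_to_zeropage &&
	    try_to_map_unused_to_zeropage(&pvmw, folio, idx))
		continue;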