Message-ID: <CAA1CXcDKKyG4QA_M91UqHGBru2GdubgCx2m1PUpFu3ftonS4zw@mail.gmail.com>
Date: Fri, 10 Jan 2025 12:41:54 -0700
From: Nico Pache <npache@...hat.com>
To: Dev Jain <dev.jain@....com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, ryan.roberts@....com,
anshuman.khandual@....com, catalin.marinas@....com, cl@...two.org,
vbabka@...e.cz, mhocko@...e.com, apopple@...dia.com,
dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org, jack@...e.cz,
srivatsa@...il.mit.edu, haowenchao22@...il.com, hughd@...gle.com,
aneesh.kumar@...nel.org, yang@...amperecomputing.com, peterx@...hat.com,
ioworker0@...il.com, wangkefeng.wang@...wei.com, ziy@...dia.com,
jglisse@...gle.com, surenb@...gle.com, vishal.moola@...il.com,
zokeefe@...gle.com, zhengqi.arch@...edance.com, jhubbard@...dia.com,
21cnbao@...il.com, willy@...radead.org, kirill.shutemov@...ux.intel.com,
david@...hat.com, aarcange@...hat.com, raquini@...hat.com,
sunnanyong@...wei.com, usamaarif642@...il.com, audra@...hat.com,
akpm@...ux-foundation.org
Subject: Re: [RFC 06/11] khugepaged: generalize alloc_charge_folio for mTHP support
On Thu, Jan 9, 2025 at 11:24 PM Dev Jain <dev.jain@....com> wrote:
>
>
>
> On 09/01/25 5:01 am, Nico Pache wrote:
> > alloc_charge_folio() allocates the new folio for the khugepaged collapse.
> > Generalize the order of the folio allocation to support future mTHP
> > collapsing.
> >
> > No functional changes in this patch.
> >
> > Signed-off-by: Nico Pache <npache@...hat.com>
> > ---
> > mm/khugepaged.c | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index e2e6ca9265ab..6daf3a943a1a 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1070,14 +1070,14 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> > }
> >
> > static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> > - struct collapse_control *cc)
> > + struct collapse_control *cc, int order)
> > {
> > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > GFP_TRANSHUGE);
> > int node = khugepaged_find_target_node(cc);
> > struct folio *folio;
> >
> > - folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> > + folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
> > if (!folio) {
> > *foliop = NULL;
> > count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
> > @@ -1121,7 +1121,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > */
> > mmap_read_unlock(mm);
> >
> > - result = alloc_charge_folio(&folio, mm, cc);
> > + result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED)
> > goto out_nolock;
> >
> > @@ -1834,7 +1834,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> > VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
> > VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
> >
> > - result = alloc_charge_folio(&new_folio, mm, cc);
> > + result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED)
> > goto out;
> >
>
> I guess we will need stat updates like I did in my patch.
Yeah, stats were on my TODO list, as well as cleaning up some of the
tracing. Those will be done before the PATCH posting.
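
Roughly, I'm picturing alloc_charge_folio() bumping a per-order
counter next to the existing PMD-wide vm events. Sketch only -
MTHP_STAT_COLLAPSE_ALLOC{,_FAILED} are placeholder names until the
stats patch actually exists:

	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
	if (!folio) {
		*foliop = NULL;
		/* keep the existing PMD-wide event */
		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
		/* per-order counterpart (placeholder stat name) */
		count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
		return SCAN_ALLOC_HUGE_PAGE_FAIL;
	}
	count_vm_event(THP_COLLAPSE_ALLOC);
	count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);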
>