Message-ID: <CAA1CXcBskENLPGB3zCNu0UpPqMMRtSiQuAdLPDPsEZt6ne6JnA@mail.gmail.com>
Date: Tue, 26 Aug 2025 07:46:14 -0600
From: Nico Pache <npache@...hat.com>
To: Wei Yang <richard.weiyang@...il.com>
Cc: linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
david@...hat.com, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, ryan.roberts@....com,
dev.jain@....com, corbet@....net, rostedt@...dmis.org, mhiramat@...nel.org,
mathieu.desnoyers@...icios.com, akpm@...ux-foundation.org, baohua@...nel.org,
willy@...radead.org, peterx@...hat.com, wangkefeng.wang@...wei.com,
usamaarif642@...il.com, sunnanyong@...wei.com, vishal.moola@...il.com,
thomas.hellstrom@...ux.intel.com, yang@...amperecomputing.com,
kirill.shutemov@...ux.intel.com, aarcange@...hat.com, raquini@...hat.com,
anshuman.khandual@....com, catalin.marinas@....com, tiwai@...e.de,
will@...nel.org, dave.hansen@...ux.intel.com, jack@...e.cz, cl@...two.org,
jglisse@...gle.com, surenb@...gle.com, zokeefe@...gle.com, hannes@...xchg.org,
rientjes@...gle.com, mhocko@...e.com, rdunlap@...radead.org, hughd@...gle.com
Subject: Re: [PATCH v10 03/13] khugepaged: generalize hugepage_vma_revalidate
for mTHP support
On Sat, Aug 23, 2025 at 7:37 PM Wei Yang <richard.weiyang@...il.com> wrote:
>
> Hi, Nico
>
> Some nit below.
>
> On Tue, Aug 19, 2025 at 07:41:55AM -0600, Nico Pache wrote:
> >For khugepaged to support different mTHP orders, we must generalize this
> >check so that it verifies the PMD range is not shared by another VMA and
> >that the requested orders are enabled.
> >
> >To ensure madvise_collapse can operate on mTHP orders even when the PMD
> >order is not enabled, convert hugepage_vma_revalidate to take a bitmap of
> >orders.
> >
> >No functional change in this patch.
> >
> >Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> >Acked-by: David Hildenbrand <david@...hat.com>
> >Co-developed-by: Dev Jain <dev.jain@....com>
> >Signed-off-by: Dev Jain <dev.jain@....com>
> >Signed-off-by: Nico Pache <npache@...hat.com>
> >---
> > mm/khugepaged.c | 13 ++++++++-----
> > 1 file changed, 8 insertions(+), 5 deletions(-)
> >
> >diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >index b7b98aebb670..2d192ec961d2 100644
> >--- a/mm/khugepaged.c
> >+++ b/mm/khugepaged.c
>
> There is a comment above this function which says "revalidate vma before
> taking mmap_lock".
>
> I am afraid it should be "after taking mmap_lock", or "after taking
> mmap_lock again"?
Good catch, I never noticed that. I've updated the comment!
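
For reference, the corrected comment would read roughly like the below; the
exact wording is my paraphrase rather than a quote from the updated patch:

	/*
	 * Revalidate the VMA after (re)taking mmap_lock, since it may have
	 * changed or been unmapped while the lock was not held.
	 */
	static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
					   bool expect_anon,
					   struct vm_area_struct **vmap,
					   struct collapse_control *cc, unsigned long orders)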
>
> >@@ -917,7 +917,7 @@ static int collapse_find_target_node(struct collapse_control *cc)
> > static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > bool expect_anon,
> > struct vm_area_struct **vmap,
> >- struct collapse_control *cc)
> >+ struct collapse_control *cc, unsigned long orders)
> > {
> > struct vm_area_struct *vma;
> > enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
> >@@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > if (!vma)
> > return SCAN_VMA_NULL;
> >
> >+	/* Always check the PMD order to ensure it's not shared by another VMA */
> > if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> > return SCAN_ADDRESS_RANGE;
> >- if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
> >+ if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
> > return SCAN_VMA_CHECK;
> > /*
> > * Anon VMA expected, the address may be unmapped then
>
> Below this there is a comment saying "thp_vma_allowable_order may return".
>
> Since you now use thp_vma_allowable_orders(), maybe the comment needs to be
> updated too.
Ack! Thanks for the review!
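
For context, callers now pass an explicit bitmap of candidate orders. A minimal
sketch of what that looks like, based on the hunks below (the OR'ed-in smaller
order is purely illustrative; this patch itself only ever passes
BIT(HPAGE_PMD_ORDER)):

	/* PMD-only collapse: a bitmap with just the PMD order set */
	unsigned long orders = BIT(HPAGE_PMD_ORDER);

	/* an mTHP-aware caller could also enable smaller orders, e.g. */
	/* orders |= BIT(HPAGE_PMD_ORDER - 1); */

	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, orders);
	if (result != SCAN_SUCCEED)
		goto out_nolock;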
>
> >@@ -1134,7 +1135,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > goto out_nolock;
> >
> > mmap_read_lock(mm);
> >- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> >+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> >+ BIT(HPAGE_PMD_ORDER));
> > if (result != SCAN_SUCCEED) {
> > mmap_read_unlock(mm);
> > goto out_nolock;
> >@@ -1168,7 +1170,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * mmap_lock.
> > */
> > mmap_write_lock(mm);
> >- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> >+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> >+ BIT(HPAGE_PMD_ORDER));
> > if (result != SCAN_SUCCEED)
> > goto out_up_write;
> > /* check if the pmd is still valid */
> >@@ -2807,7 +2810,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
> > mmap_read_lock(mm);
> > mmap_locked = true;
> > result = hugepage_vma_revalidate(mm, addr, false, &vma,
> >- cc);
> >+ cc, BIT(HPAGE_PMD_ORDER));
> > if (result != SCAN_SUCCEED) {
> > last_fail = result;
> > goto out_nolock;
> >--
> >2.50.1
> >
>
> --
> Wei Yang
> Help you, Help me
>