Message-ID: <20231018015824.GA2942027@ik1-406-35019.vs.sakura.ne.jp>
Date: Wed, 18 Oct 2023 10:58:24 +0900
From: Naoya Horiguchi <naoya.horiguchi@...ux.dev>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Muchun Song <songmuchun@...edance.com>,
Joao Martins <joao.m.martins@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Barry Song <song.bao.hua@...ilicon.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Xiongchun Duan <duanxiongchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v2 01/11] hugetlb: set hugetlb page flag before
optimizing vmemmap
On Mon, Oct 16, 2023 at 08:21:40PM -0700, Mike Kravetz wrote:
> On 10/13/23 21:58, Naoya Horiguchi wrote:
> > On Tue, Sep 05, 2023 at 02:44:00PM -0700, Mike Kravetz wrote:
> > >
> > > Fixes: f41f2ed43ca5 ("mm: hugetlb: free the vmemmap pages associated with each HugeTLB page")
> > > Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> >
> > I saw that VM_WARN_ON_ONCE() in hugetlb_vmemmap_restore is triggered when
> > memory_failure() is called on a free hugetlb page with vmemmap optimization
> > disabled (the warning is not triggered if vmemmap optimization is enabled).
> > I think that we need to check folio_test_hugetlb() before dissolve_free_huge_page()
> > calls hugetlb_vmemmap_restore_folio().
> >
> > Could you consider adding some diff like below?
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2312,15 +2312,16 @@ int dissolve_free_huge_page(struct page *page)
> > * Attempt to allocate vmemmmap here so that we can take
> > * appropriate action on failure.
> > */
> > - rc = hugetlb_vmemmap_restore_folio(h, folio);
> > - if (!rc) {
> > - update_and_free_hugetlb_folio(h, folio, false);
> > - } else {
> > - spin_lock_irq(&hugetlb_lock);
> > - add_hugetlb_folio(h, folio, false);
> > - h->max_huge_pages++;
> > - spin_unlock_irq(&hugetlb_lock);
> > + if (folio_test_hugetlb(folio)) {
> > + rc = hugetlb_vmemmap_restore_folio(h, folio);
> > + if (rc) {
> > + spin_lock_irq(&hugetlb_lock);
> > + add_hugetlb_folio(h, folio, false);
> > + h->max_huge_pages++;
> > + goto out;
> > + }
> > }
> > + update_and_free_hugetlb_folio(h, folio, false);
> >
> > return rc;
> > }
> >
>
> Hi Naoya,
>
> I believe we need to set 'rc = 0' in the !folio_test_hugetlb() case. I put
> together the following patch based on mm-stable. Please take a look.
Hi Mike, the patch looks good to me, and I confirmed that my test programs
showed no new problems on the latest mm-stable with this patch applied.
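
For reference, the failing case can be exercised with a userspace program
along the following lines (an illustrative sketch, not my actual test
program; it assumes 2MB hugetlb pages, a pre-reserved pool, root
privileges, CONFIG_MEMORY_FAILURE, and booting with
hugetlb_free_vmemmap=off so that vmemmap optimization is disabled):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MB hugetlb pages */

int main(void)
{
	uint64_t ent, paddr;
	int fd;
	FILE *f;
	char *p;

	/* fault in one hugetlb page from the pool */
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	p[0] = 1;

	/* translate the virtual address via /proc/self/pagemap */
	fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0 || pread(fd, &ent, sizeof(ent),
			    (uintptr_t)p / getpagesize() * sizeof(ent)) != sizeof(ent))
		return 1;
	close(fd);
	paddr = (ent & ((1ULL << 55) - 1)) * getpagesize();	/* bits 0-54 hold the PFN */

	/* unmap, returning the hugetlb page to the free pool */
	munmap(p, HPAGE_SIZE);

	/* trigger memory_failure() on the now-free hugetlb page */
	f = fopen("/sys/devices/system/memory/hard_offline_page", "w");
	if (!f)
		return 1;
	fprintf(f, "0x%llx\n", (unsigned long long)paddr);
	fclose(f);
	return 0;
}
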
Thank you very much.
Naoya Horiguchi
>
> From f19fbfab324d7d17de4a1e814f95ee655950c58e Mon Sep 17 00:00:00 2001
> From: Mike Kravetz <mike.kravetz@...cle.com>
> Date: Mon, 16 Oct 2023 19:55:49 -0700
> Subject: [PATCH] hugetlb: check for hugetlb folio before vmemmap_restore
>
> In commit d8f5f7e445f0 ("hugetlb: set hugetlb page flag before
> optimizing vmemmap") checks were added to print a warning if
> hugetlb_vmemmap_restore was called on a non-hugetlb page. This
> was mostly due to ordering issues in the hugetlb page set up and
> tear down sequences. One place missed was the routine
> dissolve_free_huge_page. Naoya Horiguchi noted: "I saw that
> VM_WARN_ON_ONCE() in hugetlb_vmemmap_restore is triggered when
> memory_failure() is called on a free hugetlb page with vmemmap
> optimization disabled (the warning is not triggered if vmemmap
> optimization is enabled). I think that we need to check
> folio_test_hugetlb() before dissolve_free_huge_page() calls
> hugetlb_vmemmap_restore_folio()."
>
> Perform the check as suggested by Naoya.
>
> Fixes: d8f5f7e445f0 ("hugetlb: set hugetlb page flag before optimizing vmemmap")
> Suggested-by: Naoya Horiguchi <naoya.horiguchi@...ux.dev>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb.c | 24 +++++++++++++++---------
> 1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 36b40bc9ac25..13736cbb2c19 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2290,17 +2290,23 @@ int dissolve_free_huge_page(struct page *page)
> * need to adjust max_huge_pages if the page is not freed.
> * Attempt to allocate vmemmmap here so that we can take
> * appropriate action on failure.
> + *
> + * The folio_test_hugetlb check here is because
> + * remove_hugetlb_folio will clear hugetlb folio flag for
> + * non-vmemmap optimized hugetlb folios.
> */
> - rc = hugetlb_vmemmap_restore(h, &folio->page);
> - if (!rc) {
> - update_and_free_hugetlb_folio(h, folio, false);
> - } else {
> - spin_lock_irq(&hugetlb_lock);
> - add_hugetlb_folio(h, folio, false);
> - h->max_huge_pages++;
> - spin_unlock_irq(&hugetlb_lock);
> - }
> + if (folio_test_hugetlb(folio)) {
> + rc = hugetlb_vmemmap_restore(h, &folio->page);
> + if (rc) {
> + spin_lock_irq(&hugetlb_lock);
> + add_hugetlb_folio(h, folio, false);
> + h->max_huge_pages++;
> + goto out;
> + }
> + } else
> + rc = 0;
>
> + update_and_free_hugetlb_folio(h, folio, false);
> return rc;
> }
> out:
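
The net result of this hunk in dissolve_free_huge_page(), reconstructed
from the diff above for readability:

	if (folio_test_hugetlb(folio)) {
		rc = hugetlb_vmemmap_restore(h, &folio->page);
		if (rc) {
			spin_lock_irq(&hugetlb_lock);
			add_hugetlb_folio(h, folio, false);
			h->max_huge_pages++;
			goto out;
		}
	} else
		rc = 0;

	update_and_free_hugetlb_folio(h, folio, false);
	return rc;

That is, a folio whose hugetlb flag was already cleared by
remove_hugetlb_folio() now skips the vmemmap restore entirely and is
freed with rc == 0.
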
> --
> 2.41.0
>