Message-ID: <CACw3F501ON2pK+k9g5yC4ShtLbYQXQUVbNcpJeW7UVDUUsaUUQ@mail.gmail.com>
Date: Fri, 23 Jun 2023 09:40:09 -0700
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
"songmuchun@...edance.com" <songmuchun@...edance.com>,
"shy828301@...il.com" <shy828301@...il.com>,
"linmiaohe@...wei.com" <linmiaohe@...wei.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"duenwen@...gle.com" <duenwen@...gle.com>,
"axelrasmussen@...gle.com" <axelrasmussen@...gle.com>,
"jthoughton@...gle.com" <jthoughton@...gle.com>
Subject: Re: [PATCH v1 1/3] mm/hwpoison: find subpage in hugetlb HWPOISON list
On Thu, Jun 22, 2023 at 9:19 PM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> On 06/22/23 17:45, Jiaqi Yan wrote:
> > On Tue, Jun 20, 2023 at 3:39 PM Mike Kravetz <mike.kravetz@...cle.com> wrote:
> > >
> > > On 06/20/23 11:05, Mike Kravetz wrote:
> > > > On 06/19/23 17:23, Naoya Horiguchi wrote:
> > > > >
> > > > > Considering this issue as one specific to memory error handling, checking
> > > > > HPG_vmemmap_optimized in __get_huge_page_for_hwpoison() might be helpful to
> > > > > detect the race. Then, an idea like the below diff (not tested) can make
> > > > > try_memory_failure_hugetlb() retry (retaking hugetlb_lock) and wait
> > > > > for the allocation of the vmemmap pages to complete.
> > > > >
> > > > > @@ -1938,8 +1938,11 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > > > > int ret = 2; /* fallback to normal page handling */
> > > > > bool count_increased = false;
> > > > >
> > > > > - if (!folio_test_hugetlb(folio))
> > > > > + if (!folio_test_hugetlb(folio)) {
> > > > > + if (folio_test_hugetlb_vmemmap_optimized(folio))
> > > > > + ret = -EBUSY;
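> > > > >
> > > > > For context, the caller try_memory_failure_hugetlb() already has a
> > > > > retry path for -EBUSY, so no new loop should be needed there. From
> > > > > memory (please double-check against mm/memory-failure.c), that path
> > > > > looks roughly like:
> > > > >
> > > > >         } else if (res == -EBUSY) {
> > > > >                 /* retry once, retaking hugetlb_lock inside
> > > > >                  * get_huge_page_for_hwpoison() */
> > > > >                 if (!(flags & MF_NO_RETRY)) {
> > > > >                         flags |= MF_NO_RETRY;
> > > > >                         goto retry;
> > > > >                 }
> > > > >                 ...
> > > > >         }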
> > > >
> > > > The hugetlb specific page flags (HPG_vmemmap_optimized here) reside in
> > > > the folio->private field.
> > > >
> > > > In the case where the folio is a non-hugetlb folio, the folio->private field
> > > > could be any arbitrary value. As such, the test for vmemmap_optimized may
> > > > return a false positive. We could end up retrying for an arbitrarily
> > > > long time.
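> > > >
> > > > To illustrate, the test is generated by the HPAGEFLAG macros in
> > > > include/linux/hugetlb.h and boils down to something like this
> > > > (paraphrased sketch, not the exact source):
> > > >
> > > >         static inline bool
> > > >         folio_test_hugetlb_vmemmap_optimized(struct folio *folio)
> > > >         {
> > > >                 /* hugetlb HPG_* flags live in the word at folio->private */
> > > >                 void *private = &folio->private;
> > > >
> > > >                 return test_bit(HPG_vmemmap_optimized, private);
> > > >         }
> > > >
> > > > For a non-hugetlb folio, ->private holds whatever its owner put there,
> > > > so that bit can happen to be set by coincidence.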
> > > >
> > > > I am looking at how to restructure the code which removes and frees
> > > > hugetlb pages so that folio_test_hugetlb() would remain true until
> > > > vmemmap pages are allocated. The easiest way to do this would be to
> > > > introduce another hugetlb lock/unlock cycle in the page freeing path.
> > > > This would undo some of the speedups in the series:
> > > > https://lore.kernel.org/all/20210409205254.242291-4-mike.kravetz@oracle.com/T/#m34321fbcbdf8bb35dfe083b05d445e90ecc1efab
> > > >
> > >
> > > Perhaps something like this? Minimal testing.
> >
> > Thanks for putting up a fix, Mike!
> >
> > >
> > > From e709fb4da0b6249973f9bf0540c9da0e4c585fe2 Mon Sep 17 00:00:00 2001
> > > From: Mike Kravetz <mike.kravetz@...cle.com>
> > > Date: Tue, 20 Jun 2023 14:48:39 -0700
> > > Subject: [PATCH] hugetlb: Do not clear hugetlb dtor until allocating vmemmap
> > >
> > > Freeing a hugetlb page and releasing base pages back to the underlying
> > > allocator such as buddy or cma is performed in two steps:
> > > - remove_hugetlb_folio() is called to remove the folio from hugetlb
> > > lists, get a ref on the page and remove hugetlb destructor. This
> > > all must be done under the hugetlb lock. After this call, the page
> > > can be treated as a normal compound page or a collection of base
> > > size pages.
> > > - update_and_free_hugetlb_folio() is called to allocate vmemmap if
> > > needed and the free routine of the underlying allocator is called
> > > on the resulting page. We can not hold the hugetlb lock here.
> > >
> > > One issue with this scheme is that a memory error could occur between
> > > these two steps. In this case, the memory error handling code treats
> > > the old hugetlb page as a normal compound page or collection of base
> > > pages. It will then try to SetPageHWPoison(page) on the page with an
> > > error. If the page with error is a tail page without vmemmap, a write
> > > error will occur when trying to set the flag.
> > >
> > > Address this issue by modifying remove_hugetlb_folio() and
> > > update_and_free_hugetlb_folio() such that the hugetlb destructor is not
> > > cleared until after allocating vmemmap. Since clearing the destructor
> > > requires holding the hugetlb lock, the clearing is done in
> > > remove_hugetlb_folio() if the vmemmap is present. This saves a
> > > lock/unlock cycle. Otherwise, the destructor is cleared in
> > > update_and_free_hugetlb_folio() after allocating vmemmap.
> > >
> > > Note that this will leave hugetlb pages in a state where they are marked
> > > free (by a hugetlb-specific page flag) and have a ref count. This is not
> > > a normal state. The only code that would notice is the memory error
> > > code, and it is set up to retry in such a case.
> > >
> > > A subsequent patch will create a routine to do bulk processing of
> > > vmemmap allocation. This will eliminate a lock/unlock cycle for each
> > > hugetlb page in the case where we are freeing a bunch of pages.
> > >
> > > Fixes: ???
> > > Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> > > ---
> > > mm/hugetlb.c | 75 +++++++++++++++++++++++++++++++++++-----------------
> > > 1 file changed, 51 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index d76574425da3..f7f64470aee0 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -1579,9 +1579,37 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
> > > unsigned int order) { }
> > > #endif
> > >
> > > +static inline void __clear_hugetlb_destructor(struct hstate *h,
> > > + struct folio *folio)
> > > +{
> > > + lockdep_assert_held(&hugetlb_lock);
> > > +
> > > + /*
> > > + * Very subtle
> > > + *
> > > + * For non-gigantic pages set the destructor to the normal compound
> > > + * page dtor. This is needed in case someone takes an additional
> > > + * temporary ref to the page, and freeing is delayed until they drop
> > > + * their reference.
> > > + *
> > > + * For gigantic pages set the destructor to the null dtor. This
> > > + * destructor will never be called. Before freeing the gigantic
> > > + * page destroy_compound_gigantic_folio will turn the folio into a
> > > + * simple group of pages. After this the destructor does not
> > > + * apply.
> > > + *
> > > + */
> > > + if (hstate_is_gigantic(h))
> > > + folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> > > + else
> > > + folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> > > +}
> > > +
> > > /*
> > > - * Remove hugetlb folio from lists, and update dtor so that the folio appears
> > > - * as just a compound page.
> > > + * Remove hugetlb folio from lists.
> > > + * If vmemmap exists for the folio, update dtor so that the folio appears
> > > + * as just a compound page. Otherwise, wait until after allocating vmemmap
> > > + * to update dtor.
> > > *
> > > * A reference is held on the folio, except in the case of demote.
> > > *
> > > @@ -1612,31 +1640,19 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> > > }
> > >
> > > /*
> > > - * Very subtle
> > > - *
> > > - * For non-gigantic pages set the destructor to the normal compound
> > > - * page dtor. This is needed in case someone takes an additional
> > > - * temporary ref to the page, and freeing is delayed until they drop
> > > - * their reference.
> > > - *
> > > - * For gigantic pages set the destructor to the null dtor. This
> > > - * destructor will never be called. Before freeing the gigantic
> > > - * page destroy_compound_gigantic_folio will turn the folio into a
> > > - * simple group of pages. After this the destructor does not
> > > - * apply.
> > > - *
> > > - * This handles the case where more than one ref is held when and
> > > - * after update_and_free_hugetlb_folio is called.
> > > - *
> > > - * In the case of demote we do not ref count the page as it will soon
> > > - * be turned into a page of smaller size.
> > > + * We can only clear the hugetlb destructor after allocating vmemmap
> > > + * pages. Otherwise, someone (memory error handling) may try to write
> > > + * to tail struct pages.
> > > + */
> > > + if (!folio_test_hugetlb_vmemmap_optimized(folio))
> > > + __clear_hugetlb_destructor(h, folio);
> > > +
> > > + /*
> > > + * In the case of demote we do not ref count the page as it will soon
> > > + * be turned into a page of smaller size.
> > > */
> > > if (!demote)
> > > folio_ref_unfreeze(folio, 1);
> > > - if (hstate_is_gigantic(h))
> > > - folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> > > - else
> > > - folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> > >
> > > h->nr_huge_pages--;
> > > h->nr_huge_pages_node[nid]--;
> > > @@ -1705,6 +1721,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > > {
> > > int i;
> > > struct page *subpage;
> > > + bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio);
> >
> > Can this test on vmemmap_optimized still tell us whether we should
> > call __clear_hugetlb_destructor()? From my reading:
> > 1. If a hugetlb folio is still vmemmap optimized in
> > __remove_hugetlb_folio, __remove_hugetlb_folio won't
> > __clear_hugetlb_destructor.
> > 2. Then hugetlb_vmemmap_restore in dissolve_free_huge_page will clear
> > HPG_vmemmap_optimized if it succeeds.
> > 3. Now when dissolve_free_huge_page gets into
> > __update_and_free_hugetlb_folio, we will see clear_dtor to be false
> > and __clear_hugetlb_destructor won't be called.
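> >
> > In other words, the sequence would be (a simplified sketch of how I
> > read dissolve_free_huge_page with this patch applied):
> >
> >         remove_hugetlb_folio(h, folio, false);
> >                 /* HPG_vmemmap_optimized set -> dtor NOT cleared */
> >         rc = hugetlb_vmemmap_restore(h, &folio->page);
> >                 /* on success, clears HPG_vmemmap_optimized */
> >         if (!rc)
> >                 /* clear_dtor reads false -> dtor never cleared */
> >                 update_and_free_hugetlb_folio(h, folio, false);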
>
> Good catch! That is indeed a problem with this patch.
Glad that I could help.
>
> >
> > Or maybe I misunderstood, and what you really want to do is never
> > __clear_hugetlb_destructor so that folio_test_hugetlb is always true?
>
> No, that was a bug with this patch.
>
> We could ALWAYS wait until __update_and_free_hugetlb_folio to clear the
> hugetlb destructor. However, we have to take hugetlb lock to clear it.
> If the page is not vmemmap optimized, then we can clear the
> destructor earlier in __remove_hugetlb_folio and avoid the lock/unlock
> cycle. In the past, we have had complaints about the time required to
> allocate and free a large quantity of hugetlb pages. Most of that time
> is spent in the low level allocators. However, I do not want to add
> something like an extra lock/unlock cycle unless absolutely necessary.
>
> I'll try to think of a cleaner and more foolproof way to address this.
>
> IIUC, this is an existing issue. Your patch series does not depend on
> this being fixed.
Thanks Mike, I was about to send out V2 today.
> --
> Mike Kravetz
>
> >
> > >
> > > if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > > return;
> > > @@ -1735,6 +1752,16 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> > > if (unlikely(folio_test_hwpoison(folio)))
> > > folio_clear_hugetlb_hwpoison(folio);
> > >
> > > + /*
> > > + * If vmemmap pages were allocated above, then we need to clear the
> > > + * hugetlb destructor under the hugetlb lock.
> > > + */
> > > + if (clear_dtor) {
> > > + spin_lock_irq(&hugetlb_lock);
> > > + __clear_hugetlb_destructor(h, folio);
> > > + spin_unlock_irq(&hugetlb_lock);
> > > + }
> > > +
> > > for (i = 0; i < pages_per_huge_page(h); i++) {
> > > subpage = folio_page(folio, i);
> > > subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> > > --
> > > 2.41.0
> > >