Message-ID: <CADrL8HVfBCmNYL5pM_LyHQNWyvihK=_GFnbtWB0tpTuLgiDybw@mail.gmail.com>
Date: Tue, 18 Jul 2023 09:31:11 -0700
From: James Houghton <jthoughton@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Jiaqi Yan <jiaqiyan@...gle.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v2 2/2] hugetlb: optimize update_and_free_pages_bulk to
avoid lock cycles
On Mon, Jul 17, 2023 at 5:50 PM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> update_and_free_pages_bulk is designed to free a list of hugetlb pages
> back to their associated lower level allocators. This may require
> allocating vmemmmap pages associated with each hugetlb page. The
> hugetlb page destructor must be changed before pages are freed to lower
> level allocators. However, the destructor must be changed under the
> hugetlb lock. This means there is potentially one lock cycle per page.
>
> Minimize the number of lock cycles in update_and_free_pages_bulk by:
> 1) allocating necessary vmemmap for all hugetlb pages on the list
> 2) take hugetlb lock and clear destructor for all pages on the list
> 3) free all pages on list back to low level allocators
>
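Just to restate the locking change as I read it (a rough sketch of my
mental model, not the literal patch; restore_or_requeue() is a made-up
name standing in for the phase-1 body below):

	/* old: potentially one hugetlb_lock cycle per page */
	list_for_each_entry_safe(page, t_page, list, lru) {
		update_and_free_hugetlb_folio(h, page_folio(page), false);
		cond_resched();
	}

	/* new: at most one hugetlb_lock cycle for the whole list */

	/* 1) allocate vmemmap with no lock held (may sleep) */
	list_for_each_entry_safe(page, t_page, list, lru)
		restore_or_requeue(h, page);	/* made-up helper name */

	/* 2) a single lock cycle to clear all destructors */
	spin_lock_irq(&hugetlb_lock);
	list_for_each_entry(page, list, lru)
		__clear_hugetlb_destructor(h, page_folio(page));
	spin_unlock_irq(&hugetlb_lock);

	/* 3) free to the lower level allocators, no lock needed */
	list_for_each_entry_safe(page, t_page, list, lru)
		update_and_free_hugetlb_folio(h, page_folio(page), false);
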
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb.c | 38 ++++++++++++++++++++++++++++++++++----
> 1 file changed, 34 insertions(+), 4 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4a910121a647..e6b780291539 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1856,13 +1856,43 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
> static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
> {
> struct page *page, *t_page;
> - struct folio *folio;
> + bool clear_dtor = false;
>
> + /*
> + * First allocate required vmemmmap for all pages on list. If vmemmap
> + * can not be allocated, we can not free page to lower level allocator,
> + * so add back as hugetlb surplus page.
> + */
> list_for_each_entry_safe(page, t_page, list, lru) {
> - folio = page_folio(page);
> - update_and_free_hugetlb_folio(h, folio, false);
> - cond_resched();
> + if (HPageVmemmapOptimized(page)) {
> + if (hugetlb_vmemmap_restore(h, page)) {
> + spin_lock_irq(&hugetlb_lock);
> + add_hugetlb_folio(h, page_folio(page), true);
> + spin_unlock_irq(&hugetlb_lock);
> + } else
> + clear_dtor = true;
> + cond_resched();
> + }
> + }
> +
> + /*
> + * If vmemmmap allocation performed above, then take lock to clear
s/vmemmmap/vmemmap. Also, the comment is a little hard to understand;
something like "If vmemmap allocation was performed above for any
folios, then..." seems clearer to me.
> + * destructor of all pages on list.
> + */
> + if (clear_dtor) {
> + spin_lock_irq(&hugetlb_lock);
> + list_for_each_entry(page, list, lru)
> + __clear_hugetlb_destructor(h, page_folio(page));
> + spin_unlock_irq(&hugetlb_lock);
> }
I'm not too familiar with this code, but the above block seems weird
to me. If we successfully allocated the vmemmap for *any* folio, we
clear the hugetlb destructor for *all* the folios? I would expect us
to clear the hugetlb destructor for all folios only if the vmemmap
allocation succeeded for *all* folios. And if the code is functionally
correct as is, I'm a little confused about why we need `clear_dtor`
at all; it seems like this function doesn't really need it. (I could
have some huge misunderstanding here.)
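
In other words, something like the below (completely untested, and
assuming a folio whose restore fails is no longer visited by the later
loops once it has been added back as a surplus page) is what I would
have expected:

	bool all_restored = true;

	list_for_each_entry_safe(page, t_page, list, lru) {
		if (HPageVmemmapOptimized(page)) {
			if (hugetlb_vmemmap_restore(h, page)) {
				/* restore failed: requeue as surplus */
				spin_lock_irq(&hugetlb_lock);
				add_hugetlb_folio(h, page_folio(page), true);
				spin_unlock_irq(&hugetlb_lock);
				all_restored = false;
			}
			cond_resched();
		}
	}

	/* only batch-clear destructors if every restore succeeded */
	if (all_restored) {
		spin_lock_irq(&hugetlb_lock);
		list_for_each_entry(page, list, lru)
			__clear_hugetlb_destructor(h, page_folio(page));
		spin_unlock_irq(&hugetlb_lock);
	}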
> +
> + /*
> + * Free pages back to low level allocators. vmemmap and destructors
> + * were taken care of above, so update_and_free_hugetlb_folio will
> + * not need to take hugetlb lock.
> + */
> + list_for_each_entry_safe(page, t_page, list, lru)
> + update_and_free_hugetlb_folio(h, page_folio(page), false);
> }
>
> struct hstate *size_to_hstate(unsigned long size)
> --
> 2.41.0
>