Message-Id: <4015AB29-76AD-4CA7-89C4-5916E2F8670B@linux.dev>
Date: Wed, 19 Jul 2023 10:34:36 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Jiaqi Yan <jiaqiyan@...gle.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
James Houghton <jthoughton@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] hugetlb: Do not clear hugetlb dtor until
allocating vmemmap
> On Jul 18, 2023, at 08:49, Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> Freeing a hugetlb page and releasing base pages back to the underlying
> allocator such as buddy or cma is performed in two steps:
> - remove_hugetlb_folio() is called to remove the folio from hugetlb
>   lists, take a ref on the page and clear the hugetlb destructor. This
>   all must be done under the hugetlb lock. After this call, the page
>   can be treated as a normal compound page or a collection of base
>   size pages.
> - update_and_free_hugetlb_folio() is called to allocate vmemmap if
>   needed, and the free routine of the underlying allocator is called
>   on the resulting page. We cannot hold the hugetlb lock here.
>
> One issue with this scheme is that a memory error could occur between
> these two steps. In this case, the memory error handling code treats
> the old hugetlb page as a normal compound page or collection of base
> pages. It will then try to SetPageHWPoison(page) on the page with the
> error. If that page is a tail page whose vmemmap has been freed, a
> write error will occur when trying to set the flag.
>
> Address this issue by modifying remove_hugetlb_folio() and
> update_and_free_hugetlb_folio() such that the hugetlb destructor is not
> cleared until after allocating vmemmap. Since clearing the destructor
> requires holding the hugetlb lock, the clearing is done in
> remove_hugetlb_folio() if the vmemmap is present. This saves a
> lock/unlock cycle. Otherwise, the destructor is cleared in
> update_and_free_hugetlb_folio() after allocating vmemmap.
>
> Note that this will leave hugetlb pages in a state where they are marked
> free (by hugetlb specific page flag) and have a ref count. This is not
> a normal state. The only code that would notice is the memory error
> code, and it is set up to retry in such a case.
>
> A subsequent patch will create a routine to do bulk processing of
> vmemmap allocation. This will eliminate a lock/unlock cycle for each
> hugetlb page in the case where we are freeing a large number of pages.
>
> Fixes: ad2fa3717b74 ("mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page")
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
Reviewed-by: Muchun Song <songmuchun@...edance.com>
Thanks.