Message-ID: <5125772b-2939-e71f-da4a-374cb74c9061@oracle.com>
Date: Tue, 11 Aug 2020 16:25:01 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Wei Yang <richard.weiyang@...ux.alibaba.com>
Cc: Michal Hocko <mhocko@...e.com>, Baoquan He <bhe@...hat.com>,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/10] mm/hugetlb: not necessary to abuse temporary page
to workaround the nasty free_huge_page
On 8/11/20 4:19 PM, Wei Yang wrote:
> On Tue, Aug 11, 2020 at 02:43:28PM -0700, Mike Kravetz wrote:
>> Subject: [PATCH] hugetlb: optimize race error return in
>> alloc_surplus_huge_page
>>
>> The routine alloc_surplus_huge_page() could race with a pool
>> size change. If this happens, the allocated page may not be needed.
>> To free the page, the current code will 'Abuse temporary page to
>> workaround the nasty free_huge_page codeflow'. Instead, directly
>> call the low level routine that free_huge_page uses. This works
>> out well because the page is new, we hold the only reference and
>> already hold the hugetlb_lock.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>> ---
>> mm/hugetlb.c | 13 ++++++++-----
>> 1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 590111ea6975..ac89b91fba86 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1923,14 +1923,17 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
>> /*
>> * We could have raced with the pool size change.
>> * Double check that and simply deallocate the new page
>> - * if we would end up overcommiting the surpluses. Abuse
>> - * temporary page to workaround the nasty free_huge_page
>> - * codeflow
>> + * if we would end up overcommiting the surpluses.
>> */
>> if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
>> - SetPageHugeTemporary(page);
>> + /*
>> + * Since this page is new, we hold the only reference and we
>> + * already hold the hugetlb_lock, so call the low level free
>> + * page routine directly. This saves at least a lock roundtrip.
>
> The change looks good to me, though I'm not sure I understand the "lock roundtrip".
> You mean we don't need to release the hugetlb_lock?
Correct.
Normally we would free the page via free_huge_page() processing. To do that
we need to drop hugetlb_lock and call put_page()/free_huge_page(), which will
need to acquire hugetlb_lock again.
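Roughly, the two paths compare like this (simplified sketch of the flow only,
not the exact code in either version):

	/* old path: hand the page to the normal free_huge_page() flow */
	spin_unlock(&hugetlb_lock);
	put_page(page);			/* last ref -> free_huge_page() retakes hugetlb_lock */

	/* new path: we hold the only ref and already hold hugetlb_lock */
	(void)put_page_testzero(page);	/* drop the ref without calling the destructor */
	update_and_free_page(h, page);	/* free directly, still under hugetlb_lock */
	spin_unlock(&hugetlb_lock);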
--
Mike Kravetz
>
>> + */
>> + (void)put_page_testzero(page); /* don't call destructor */
>> + update_and_free_page(h, page);
>> spin_unlock(&hugetlb_lock);
>> - put_page(page);
>> return NULL;
>> } else {
>> h->surplus_huge_pages++;
>> --
>> 2.25.4