Message-ID: <4489afcd-be3e-7830-4e37-03abe454486a@oracle.com>
Date: Thu, 29 Apr 2021 15:23:44 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>
Cc: Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, bp@...en8.de,
X86 ML <x86@...nel.org>, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
"Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
David Hildenbrand <david@...hat.com>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
Joao Martins <joao.m.martins@...cle.com>,
Xiongchun duan <duanxiongchun@...edance.com>,
fam.zheng@...edance.com, zhengqi.arch@...edance.com,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v21 0/9] Free some vmemmap pages of HugeTLB page

On 4/28/21 9:02 PM, Muchun Song wrote:
> On Thu, Apr 29, 2021 at 10:32 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>>
>> On 4/28/21 5:26 AM, Muchun Song wrote:
>>> On Wed, Apr 28, 2021 at 7:47 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>>>>
>>>> Thanks! I will take a look at the modifications soon.
>>>>
>>>> I applied the patches to Andrew's mmotm-2021-04-21-23-03, ran some tests, and
>>>> got the following warning. We may need to special case that call to
>>>> __prep_new_huge_page/free_huge_page_vmemmap from alloc_and_dissolve_huge_page,
>>>> as it is made while holding the hugetlb lock with IRQs disabled.
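>>>>
>>>> Roughly, the problematic pattern is the following (a simplified sketch,
>>>> not the exact mmotm code; the point is that free_huge_page_vmemmap()
>>>> may sleep, which is not allowed under a spinlock with IRQs disabled):
>>>>
>>>> 	spin_lock_irq(&hugetlb_lock);		/* IRQs off, atomic context */
>>>> 	...
>>>> 	__prep_new_huge_page(h, new_page);	/* -> free_huge_page_vmemmap(), */
>>>> 						/*    which may sleep -> warning */
>>>> 	spin_unlock_irq(&hugetlb_lock);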
>>>
>>> Good catch. Thanks Mike. I will fix it in the next version. How about this:
>>>
>>> @@ -1618,7 +1617,8 @@ static void __prep_new_huge_page(struct hstate *h, struct page *page)
>>>
>>>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>>>  {
>>> +	free_huge_page_vmemmap(h, page);
>>>  	__prep_new_huge_page(page);
>>>  	spin_lock_irq(&hugetlb_lock);
>>>  	__prep_account_new_huge_page(h, nid);
>>>  	spin_unlock_irq(&hugetlb_lock);
>>> @@ -2429,6 +2429,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>>  	if (!new_page)
>>>  		return -ENOMEM;
>>>
>>> +	free_huge_page_vmemmap(h, new_page);
>>>  retry:
>>>  	spin_lock_irq(&hugetlb_lock);
>>>  	if (!PageHuge(old_page)) {
>>> @@ -2489,7 +2490,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>>
>>>  free_new:
>>>  	spin_unlock_irq(&hugetlb_lock);
>>> -	__free_pages(new_page, huge_page_order(h));
>>> +	update_and_free_page(h, new_page, false);
>>>
>>>  	return ret;
>>>  }
>>>
>>>
>>
>> Another option would be to leave the prep* routines as is and only
>> modify alloc_and_dissolve_huge_page as follows:
>
> OK. LGTM. I will use this. Thanks Mike.
There are issues with my suggested patch below. I am occasionally
hitting the BUG that checks for a zero page ref count at put_page
time. I still do not fully understand why, but I do not hit the same
BUG with your patch above. Please do not use my patch below.
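
For reference, the invariant at play, as a simplified sketch (not the
exact hugetlb code): the buddy allocator hands back the new page with a
ref count of 1, while pages sitting in the pool must have a ref count
of 0, hence the page_ref_dec() before enqueueing:

	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
	...				/* allocator returns refcount == 1 */
	page_ref_dec(new_page);		/* pool requires refcount == 0     */
	enqueue_huge_page(h, new_page);

If a reference is dropped once too often anywhere along the way, the
later put_page() sees a ref count that is already zero and triggers
that BUG.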
--
Mike Kravetz
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 9c617c19fc18..f8e5013a6b46 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2420,14 +2420,15 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>
>>  	/*
>>  	 * Before dissolving the page, we need to allocate a new one for the
>> -	 * pool to remain stable. Using alloc_buddy_huge_page() allows us to
>> -	 * not having to deal with prep_new_huge_page() and avoids dealing of any
>> -	 * counters. This simplifies and let us do the whole thing under the
>> -	 * lock.
>> +	 * pool to remain stable. Here, we allocate the page and 'prep' it
>> +	 * by doing everything but actually updating counters and adding to
>> +	 * the pool. This simplifies and lets us do most of the processing
>> +	 * under the lock.
>>  	 */
>>  	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
>>  	if (!new_page)
>>  		return -ENOMEM;
>> +	__prep_new_huge_page(h, new_page);
>>
>>  retry:
>>  	spin_lock_irq(&hugetlb_lock);
>> @@ -2473,7 +2474,6 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>  		 * Reference count trick is needed because allocator gives us
>>  		 * referenced page but the pool requires pages with 0 refcount.
>>  		 */
>> -		__prep_new_huge_page(h, new_page);
>>  		__prep_account_new_huge_page(h, nid);
>>  		page_ref_dec(new_page);
>>  		enqueue_huge_page(h, new_page);
>> @@ -2489,7 +2489,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>
>>  free_new:
>>  	spin_unlock_irq(&hugetlb_lock);
>> -	__free_pages(new_page, huge_page_order(h));
>> +	update_and_free_page(h, old_page, false);
>>
>>  	return ret;
>>  }
>>
>> --
>> Mike Kravetz