Message-ID: <b8ef43c1-e4b5-eae2-0cdf-1ce25accc36f@oracle.com>
Date: Mon, 25 Jan 2021 15:25:35 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>,
David Hildenbrand <david@...hat.com>
Cc: David Rientjes <rientjes@...gle.com>,
Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
"Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v13 05/12] mm: hugetlb: allocate the
vmemmap pages associated with each HugeTLB page

On 1/25/21 1:34 AM, Muchun Song wrote:
> On Mon, Jan 25, 2021 at 5:15 PM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 25.01.21 08:41, Muchun Song wrote:
>>> On Mon, Jan 25, 2021 at 2:40 PM Muchun Song <songmuchun@...edance.com> wrote:
>>>>
>>>> On Mon, Jan 25, 2021 at 8:05 AM David Rientjes <rientjes@...gle.com> wrote:
>>>>>
>>>>>
>>>>> On Sun, 17 Jan 2021, Muchun Song wrote:
>>>>>
>>>>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>>>>> index ce4be1fa93c2..3b146d5949f3 100644
>>>>>> --- a/mm/sparse-vmemmap.c
>>>>>> +++ b/mm/sparse-vmemmap.c
>>>>>> @@ -29,6 +29,7 @@
>>>>>> #include <linux/sched.h>
>>>>>> #include <linux/pgtable.h>
>>>>>> #include <linux/bootmem_info.h>
>>>>>> +#include <linux/delay.h>
>>>>>>
>>>>>> #include <asm/dma.h>
>>>>>> #include <asm/pgalloc.h>
>>>>>> @@ -40,7 +41,8 @@
>>>>>> * @remap_pte: called for each non-empty PTE (lowest-level) entry.
>>>>>> * @reuse_page: the page which is reused for the tail vmemmap pages.
>>>>>> * @reuse_addr: the virtual address of the @reuse_page page.
>>>>>> - * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
>>>>>> + * @vmemmap_pages: the list head of the vmemmap pages that can be freed
>>>>>> + * or is mapped from.
>>>>>> */
>>>>>> struct vmemmap_remap_walk {
>>>>>> void (*remap_pte)(pte_t *pte, unsigned long addr,
>>>>>> @@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
>>>>>> struct list_head *vmemmap_pages;
>>>>>> };
>>>>>>
>>>>>> +/* The gfp mask of allocating vmemmap page */
>>>>>> +#define GFP_VMEMMAP_PAGE \
>>>>>> + (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
>>>>>> +
>>>>>
>>>>> This is unnecessary, just use the gfp mask directly in allocator.
>>>>
>>>> Will do. Thanks.
>>>>
>>>>>
>>>>>> static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
>>>>>> unsigned long end,
>>>>>> struct vmemmap_remap_walk *walk)
>>>>>> @@ -228,6 +234,75 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
>>>>>> free_vmemmap_page_list(&vmemmap_pages);
>>>>>> }
>>>>>>
>>>>>> +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
>>>>>> + struct vmemmap_remap_walk *walk)
>>>>>> +{
>>>>>> + pgprot_t pgprot = PAGE_KERNEL;
>>>>>> + struct page *page;
>>>>>> + void *to;
>>>>>> +
>>>>>> + BUG_ON(pte_page(*pte) != walk->reuse_page);
>>>>>> +
>>>>>> + page = list_first_entry(walk->vmemmap_pages, struct page, lru);
>>>>>> + list_del(&page->lru);
>>>>>> + to = page_to_virt(page);
>>>>>> + copy_page(to, (void *)walk->reuse_addr);
>>>>>> +
>>>>>> + set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>>>>>> +}
>>>>>> +
>>>>>> +static void alloc_vmemmap_page_list(struct list_head *list,
>>>>>> + unsigned long start, unsigned long end)
>>>>>> +{
>>>>>> + unsigned long addr;
>>>>>> +
>>>>>> + for (addr = start; addr < end; addr += PAGE_SIZE) {
>>>>>> + struct page *page;
>>>>>> + int nid = page_to_nid((const void *)addr);
>>>>>> +
>>>>>> +retry:
>>>>>> + page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
>>>>>> + if (unlikely(!page)) {
>>>>>> + msleep(100);
>>>>>> + /*
>>>>>> + * We should retry infinitely, because we cannot
>>>>>> + * handle allocation failures. Once we allocate
>>>>>> + * vmemmap pages successfully, then we can free
>>>>>> + * a HugeTLB page.
>>>>>> + */
>>>>>> + goto retry;
>>>>>
>>>>> Ugh, I don't think this will work, there's no guarantee that we'll ever
>>>>> succeed and now we can't free a 2MB hugepage because we cannot allocate a
>>>>> 4KB page. We absolutely have to ensure we make forward progress here.
>>>>
>>>> This can trigger an OOM when there is no memory and kill someone to release
>>>> some memory. Right?
>>>>
>>>>>
>>>>> We're going to be freeing the hugetlb page after this succeeds, can we
>>>>> not use part of the hugetlb page that we're freeing for this memory
>>>>> instead?
>>>>
>>>> It seems like a good idea. We can try to allocate memory first; if successful,
>>>> just use the new page to remap (it can reduce memory fragmentation).
>>>> If not, we can use part of the hugetlb page to remap. What's your opinion
>>>> about this?
>>>
>>> What if the HugeTLB page is a gigantic page which is allocated from
>>> CMA? In this case, we cannot use part of the hugetlb page to remap.
>>> Right?
>>
>> Right; and I don't think the "reuse part of a huge page as vmemmap while
>> freeing, while that part itself might not have a proper vmemmap yet (or
>> might cover itself now)" is particularly straightforward. Maybe I'm
>> wrong :)
>>
>> Also, watch out for huge pages on ZONE_MOVABLE; in that case you also
>> shouldn't allocate the vmemmap from there ...
>
> Yeah, you are right. So I tend to trigger OOM to kill other processes to
> reclaim some memory when the memory allocation fails.

IIUC, even non-gigantic hugetlb pages can exist in CMA. They can be migrated
out of CMA if needed (except free pages in the pool, but that is a separate
issue David H already noted in another thread).

When we first started discussing this patch set, one suggestion was to force
hugetlb pool pages to be allocated at boot time and never permit them to be
freed back to the buddy allocator. A primary reason for the suggestion was
to avoid this issue of needing to allocate memory when freeing a hugetlb page
to buddy. IMO, that would be an unreasonable restriction for many existing
hugetlb use cases.

A simple thought is that we simply fail the 'freeing hugetlb page to buddy'
operation if we cannot allocate the required vmemmap pages. However, as
David R says, freeing hugetlb pages to buddy is a reasonable way to free up
memory in oom situations. Then again, failing the operation 'might' be better
than looping forever trying to allocate the pages needed? As mentioned in the
previous patch, it would be better to use GFP_ATOMIC to at least dip into
reserves if we can.
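
To make that concrete, here is a hedged sketch of what an error-returning
allocation path with a GFP_ATOMIC last-resort retry might look like. The shape
follows the quoted alloc_vmemmap_page_list(); the int return value and the
GFP_ATOMIC fallback are assumptions drawn from this discussion, not actual
patch code.

/*
 * Sketch only (not the v13 patch): an error-returning variant of
 * alloc_vmemmap_page_list() so the caller can fail the 'free hugetlb
 * page to buddy' operation instead of retrying forever.  The GFP_ATOMIC
 * last-resort retry to dip into reserves is an assumption taken from
 * this discussion.
 */
static int alloc_vmemmap_page_list(struct list_head *list,
				   unsigned long start, unsigned long end)
{
	/* Allocate on the node of the hugetlb page this vmemmap range describes. */
	int nid = page_to_nid((struct page *)start);
	struct page *page, *next;
	unsigned long addr;
	LIST_HEAD(pages);

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		page = alloc_pages_node(nid, GFP_KERNEL | __GFP_RETRY_MAYFAIL |
					     __GFP_NOWARN | __GFP_THISNODE, 0);
		if (!page)
			page = alloc_pages_node(nid, GFP_ATOMIC | __GFP_THISNODE, 0);
		if (!page)
			goto out_free;
		list_add_tail(&page->lru, &pages);
	}
	list_splice_tail(&pages, list);
	return 0;

out_free:
	list_for_each_entry_safe(page, next, &pages, lru) {
		list_del(&page->lru);
		__free_page(page);
	}
	return -ENOMEM;
}

The caller would then propagate the error and leave the hugetlb page in the
pool instead of dissolving it.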

I think using pages of the hugetlb for vmemmap to cover pages of the hugetlb
is the only way we can guarantee success of freeing a hugetlb page to buddy.
However, this should only be used when there is no other option, and it could
result in vmemmap pages residing in CMA or ZONE_MOVABLE. I'm not sure how
much better this is than failing the free to buddy operation.
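
Purely to illustrate that trade-off, a hedged sketch of the ordering: prefer
buddy pages, and only fall back to carving the vmemmap out of the hugetlb page
being freed. hugetlb_vmemmap_size(), remap_vmemmap_with_list() and
remap_with_hugetlb_tail_pages() are hypothetical names used only for
illustration, and the sketch does not attempt to solve the "vmemmap covering
itself" problem David H raised above.

/*
 * Hypothetical ordering only; the helpers named here do not exist in
 * the patch, and a real remap_with_hugetlb_tail_pages() would have to
 * handle the vmemmap-covering-itself case.
 */
static int restore_hugetlb_vmemmap(struct hstate *h, struct page *head)
{
	unsigned long start = (unsigned long)head;
	unsigned long end = start + hugetlb_vmemmap_size(h);
	LIST_HEAD(vmemmap_pages);

	/* Preferred: buddy pages, keeping the vmemmap out of CMA/ZONE_MOVABLE. */
	if (!alloc_vmemmap_page_list(&vmemmap_pages, start, end))
		return remap_vmemmap_with_list(start, end, &vmemmap_pages);

	/*
	 * Last resort: reuse tail pages of the hugetlb page itself.
	 * Guaranteed to succeed, but the vmemmap may then reside in CMA
	 * or ZONE_MOVABLE.
	 */
	return remap_with_hugetlb_tail_pages(head, start, end);
}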

I don't have a solution. Just wanted to share some thoughts.

BTW, just thought of something else. Consider offlining a memory section that
contains a free hugetlb page. The offline code will try to dissolve the hugetlb
page (free to buddy). So, vmemmap pages will need to be allocated. We will
try to allocate vmemmap pages on the same node as the hugetlb page. But, if
this memory section is the last of the node, all the pages will have been
isolated and no allocations will succeed. Is that a possible scenario, or am
I just having too many negative thoughts?
--
Mike Kravetz