Message-ID: <fe57ef80-bbdb-44dc-97d9-b390778430a4@redhat.com>
Date: Fri, 20 Dec 2024 17:30:24 +0100
From: David Hildenbrand <david@...hat.com>
To: Ge Yang <yangge1116@....com>, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, stable@...r.kernel.org,
21cnbao@...il.com, baolin.wang@...ux.alibaba.com, muchun.song@...ux.dev,
liuzixing@...on.cn, Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH] replace free hugepage folios after migration
On 20.12.24 09:56, Ge Yang wrote:
>
>
> On 2024/12/20 0:40, David Hildenbrand wrote:
>> On 18.12.24 07:33, yangge1116@....com wrote:
>>> From: yangge <yangge1116@....com>
>>
>> CCing Oscar, who worked on migrating these pages during memory offlining
>> and alloc_contig_range().
>>
>>>
>>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>>> hugetlb pages. The allocation of contiguous memory via the
>>> cma_alloc() function can fail probabilistically.
>>>
>>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>>> within the allocation range, even if that page has already been
>>> migrated. When in-use hugetlb pages are migrated, they may simply
>>> be released back into the free hugepage pool instead of being
>>> returned to the buddy system. This can cause the
>>> test_pages_isolated() function check to fail, ultimately leading
>>> to the failure of the cma_alloc() function:
>>> cma_alloc()
>>>     __alloc_contig_migrate_range() // migrate in-use hugepage
>>>     test_pages_isolated()
>>>         __test_page_isolated_in_pageblock()
>>>             PageBuddy(page) // check if the page is in buddy
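
(For context: the check that call chain ends in is essentially the scan below,
a simplified paraphrase of __test_page_isolated_in_pageblock() from memory, not
the verbatim code. A free hugetlb folio is not PageBuddy, so the scan stops
early and the range is reported as not isolated.)

    /* simplified sketch, not verbatim mm/page_isolation.c */
    while (pfn < end_pfn) {
        page = pfn_to_page(pfn);
        if (PageBuddy(page))
            /* free buddy page: skip the whole buddy block */
            pfn += 1 << buddy_order(page);
        else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
            pfn++;
        else
            break;    /* e.g. a free hugetlb folio */
    }
    return pfn;    /* caller fails if pfn != end_pfn */
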
>>
>> I thought this would be working as expected, at least we tested it with
>> alloc_contig_range / virtio-mem a while ago.
>>
>> On the memory_offlining path, we migrate hugetlb folios, but also
>> dissolve any remaining free folios even if it means that we will go
>> below the requested number of hugetlb pages in our pool.
>>
>> During alloc_contig_range(), we only migrate them, to then free them up
>> after migration.
>>
>> Under which circumstances does it happen that "they may simply be
>> released back into the free hugepage pool instead of being returned to
>> the buddy system"?
>>
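
(Recapping that difference in code terms, from memory and simplified: the
offlining path has an explicit dissolve step after migration that
alloc_contig_range() does not have.)

    /* mm/memory_hotplug.c:offline_pages(), heavily simplified */
    do_migrate_range(pfn, end_pfn);    /* also migrates in-use hugetlb */
    ...
    /* any free hugetlb folio left in the range is dissolved, i.e. handed
     * back to the buddy, even if that shrinks the configured pool;
     * function name from memory */
    dissolve_free_hugetlb_folios(start_pfn, end_pfn);
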
>
> After migration, in-use hugetlb pages are only released back to the
> hugetlb pool and are not returned to the buddy system.

We had

commit ae37c7ff79f1f030e28ec76c46ee032f8fd07607
Author: Oscar Salvador <osalvador@...e.de>
Date:   Tue May 4 18:35:29 2021 -0700

    mm: make alloc_contig_range handle in-use hugetlb pages

    alloc_contig_range() will fail if it finds a HugeTLB page within the
    range, without a chance to handle them. Since HugeTLB pages can be
    migrated as any LRU or Movable page, it does not make sense to bail out
    without trying. Enable the interface to recognize in-use HugeTLB pages so
    we can migrate them, and have much better chances to succeed the call.

And I am trying to figure out if it never worked correctly, or if
something changed that broke it.

In start_isolate_page_range()->isolate_migratepages_block(), we do the

    ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);

to add these folios to the cc->migratepages list.

In __alloc_contig_migrate_range(), we migrate the pages using migrate_pages().

After that, the src hugetlb folios should still be isolated? But I'm getting
confused about when these pages get un-isolated and put back to hugetlb/freed.
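
If I read mm/migrate.c correctly (simplified from memory, not the verbatim
code), on success the source folio is put back and freed right in the
migration path:

    /* unmap_and_move_huge_page(), roughly, on MIGRATEPAGE_SUCCESS */
    folio_putback_active_hugetlb(src);    /* off the isolation list, then
                                           * the final folio_put() */
    /* ... and the last reference ends up in free_huge_folio() */
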
>
> The specific steps for reproduction are as follows:
> 1. Reserve hugetlb pages. Some of these hugetlb pages are allocated
> within the CMA area.
> echo 10240 > /proc/sys/vm/nr_hugepages
>
> 2. To ensure that hugetlb pages are in an in-use state, we can use the
> following command.
> qemu-system-x86_64 \
> -mem-prealloc \
> -mem-path /dev/hugepage/ \
> ...
>
> 3. At this point, using cma_alloc() to allocate contiguous memory may
> result in a probable failure.
>
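
To see where the source folios actually end up after step 3, something like
the following could dump the relevant pool counters before/after the
cma_alloc() attempt (a hypothetical little helper, assuming 2M default huge
pages on x86-64; free_hugepages growing would mean "back into the pool",
nr_hugepages shrinking would mean "handed to the buddy"):

    #include <stdio.h>

    /* print one counter from the 2M hugetlb sysfs directory */
    static void dump(const char *name)
    {
            char path[256];
            unsigned long val;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/mm/hugepages/hugepages-2048kB/%s", name);
            f = fopen(path, "r");
            if (f && fscanf(f, "%lu", &val) == 1)
                    printf("%-18s %lu\n", name, val);
            if (f)
                    fclose(f);
    }

    int main(void)
    {
            dump("nr_hugepages");
            dump("free_hugepages");
            dump("surplus_hugepages");
            return 0;
    }
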
Will these free hugetlb folios become surplus pages? I would have assumed
they get freed immediately to the buddy, or does your config maybe allow for
surplus pages?
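
(For reference, my rough recollection of the decision in free_huge_folio(),
simplified and not verbatim mm/hugetlb.c:)

    if (folio_test_hugetlb_temporary(folio)) {
            /* temporary folios go straight back to the buddy */
            remove_hugetlb_folio(h, folio, false);
            update_and_free_hugetlb_folio(h, folio, true);
    } else if (h->surplus_huge_pages_node[nid]) {
            /* trim surplus: also hand the page back to the buddy */
            remove_hugetlb_folio(h, folio, true);
            update_and_free_hugetlb_folio(h, folio, true);
    } else {
            /* default: back onto the hstate free list, not the buddy */
            enqueue_hugetlb_folio(h, folio);
    }
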
--
Cheers,
David / dhildenb