Message-ID: <A20FCC61-0809-4308-91F3-6F45E91FEF99@nvidia.com>
Date: Mon, 14 Jul 2025 11:33:20 -0400
From: Zi Yan <ziy@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: Balbir Singh <balbirs@...dia.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins <hughd@...gle.com>,
Kirill Shutemov <k.shutemov@...il.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm/huge_memory: move unrelated code out of
__split_unmapped_folio()
On 14 Jul 2025, at 11:30, David Hildenbrand wrote:
> On 11.07.25 20:23, Zi Yan wrote:
>> remap(), folio_ref_unfreeze(), lru_add_split_folio() are not relevant to
>> splitting unmapped folio operations. Move them out to the caller so that
>> __split_unmapped_folio() only handles unmapped folio splits. This makes
>> __split_unmapped_folio() reusable.
>>
>> Convert VM_BUG_ON(mapping) to use VM_WARN_ON_ONCE_FOLIO().
>>
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> ---
>
> [...]
>
>> - if (folio_test_swapcache(folio)) {
>> - VM_BUG_ON(mapping);
>> -
>> - /* a swapcache folio can only be uniformly split to order-0 */
>> - if (!uniform_split || new_order != 0)
>> - return -EINVAL;
>> -
>> - swap_cache = swap_address_space(folio->swap);
>> - xa_lock(&swap_cache->i_pages);
>> - }
>> -
>> if (folio_test_anon(folio))
>> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>> - /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
>> - lruvec = folio_lruvec_lock(folio);
>>
>
> Nit: now double empty line.
Will fix it.
>
>> folio_clear_has_hwpoisoned(folio);
>> @@ -3480,9 +3451,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> for (split_order = start_order;
>> split_order >= new_order && !stop_split;
>> split_order--) {
>> - int old_order = folio_order(folio);
>> - struct folio *release;
>> struct folio *end_folio = folio_next(folio);
>> + int old_order = folio_order(folio);
>> + struct folio *new_folio;
>> /* order-1 anonymous folio is not supported */
>> if (folio_test_anon(folio) && split_order == 1)
>> @@ -3517,113 +3488,34 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> after_split:
>> /*
>> - * Iterate through after-split folios and perform related
>> - * operations. But in buddy allocator like split, the folio
>> + * Iterate through after-split folios and update folio stats.
>> + * But in buddy allocator like split, the folio
>> * containing the specified page is skipped until its order
>> * is new_order, since the folio will be worked on in next
>> * iteration.
>> */
>> - for (release = folio; release != end_folio; release = next) {
>> - next = folio_next(release);
>> + for (new_folio = folio; new_folio != end_folio; new_folio = next) {
>> + next = folio_next(new_folio);
>> /*
>> - * for buddy allocator like split, the folio containing
>> - * page will be split next and should not be released,
>> - * until the folio's order is new_order or stop_split
>> - * is set to true by the above xas_split() failure.
>> + * for buddy allocator like split, new_folio containing
>> + * page could be split again, thus do not change stats
>> + * yet. Wait until new_folio's order is new_order or
>> + * stop_split is set to true by the above xas_split()
>> + * failure.
>> */
>> - if (release == page_folio(split_at)) {
>> - folio = release;
>> + if (new_folio == page_folio(split_at)) {
>> + folio = new_folio;
>> if (split_order != new_order && !stop_split)
>> continue;
>> }
>> - if (folio_test_anon(release)) {
>> - mod_mthp_stat(folio_order(release),
>> + if (folio_test_anon(new_folio)) {
>> + mod_mthp_stat(folio_order(new_folio),
>> MTHP_STAT_NR_ANON, 1);
>> }
>
> Nit: {} can be dropped
Sure.
>
> Code is still confusing, so it could be that I'm missing something, but in
> general this looks like an improvement to me.
>
> I think we can easily get rid of the goto label in __split_unmapped_folio() by doing something like:
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 14bc0b54cf9f0..db0ae957a0ba8 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3435,18 +3435,18 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> if (xas_error(xas)) {
> ret = xas_error(xas);
> stop_split = true;
> - goto after_split;
> }
> }
> }
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> + if (!stop_split) {
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
> - __split_folio_to_order(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
> + }
> -after_split:
> /*
> * Iterate through after-split folios and update folio stats.
> * But in buddy allocator like split, the folio
>
Yep, looks much better to me. Let me fix it in V3. Thank you for the review
and suggestions.
Best Regards,
Yan, Zi