Message-ID: <B393CCAB-0398-4BE9-B015-BFACEF603FE5@nvidia.com>
Date: Tue, 25 Nov 2025 10:41:53 -0500
From: Zi Yan <ziy@...dia.com>
To: Balbir Singh <balbirs@...dia.com>,
"David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>, Naoya Horiguchi <nao.horiguchi@...il.com>,
Wei Yang <richard.weiyang@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/4] mm/huge_memory: replace can_split_folio() with
direct refcount calculation
On 25 Nov 2025, at 3:55, David Hildenbrand (Red Hat) wrote:
> On 11/24/25 23:14, Balbir Singh wrote:
>> On 11/22/25 13:55, Zi Yan wrote:
>>> can_split_folio() is just a refcount comparison, making sure only the
>>> split caller holds an extra pin. Open code it with
>>> folio_expected_ref_count() != folio_ref_count() - 1. For the extra_pins
>>> used by folio_ref_freeze(), add folio_cache_references() to calculate it.
>>>
>>> Suggested-by: David Hildenbrand (Red Hat) <david@...nel.org>
>>> Signed-off-by: Zi Yan <ziy@...dia.com>
>>> ---
>>>  include/linux/huge_mm.h |  1 -
>>>  mm/huge_memory.c        | 43 ++++++++++++++++-------------------------
>>>  mm/vmscan.c             |  3 ++-
>>>  3 files changed, 19 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index 97686fb46e30..1ecaeccf39c9 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -369,7 +369,6 @@ enum split_type {
>>>  	SPLIT_TYPE_NON_UNIFORM,
>>>  };
>>>
>>> -bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
>>>  int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>>  		unsigned int new_order);
>>>  int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index c1f1055165dd..6c821c1c0ac3 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3455,23 +3455,6 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>>>  	}
>>>  }
>>> -/* Racy check whether the huge page can be split */
>>> -bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>>> -{
>>> -	int extra_pins;
>>> -
>>> -	/* Additional pins from page cache */
>>> -	if (folio_test_anon(folio))
>>> -		extra_pins = folio_test_swapcache(folio) ?
>>> -				folio_nr_pages(folio) : 0;
>>> -	else
>>> -		extra_pins = folio_nr_pages(folio);
>>> -	if (pextra_pins)
>>> -		*pextra_pins = extra_pins;
>>> -	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins -
>>> -			caller_pins;
>>> -}
>>> -
>>>  static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
>>>  {
>>>  	for (; nr_pages; page++, nr_pages--)
>>> @@ -3776,17 +3759,26 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>  	return 0;
>>>  }
>>>
>>> +/* Number of folio references from the pagecache or the swapcache. */
>>> +static unsigned int folio_cache_references(const struct folio *folio)
>>
>> folio_cache_ref_count?
>
> Yes, makes sense.
>
>>
>>> +{
>>> +	if (folio_test_anon(folio) && !folio_test_swapcache(folio))
>>> +		return 0;
>>> +	return folio_nr_pages(folio);
>>> +}
>>> +
>>
>> Does this belong to include/linux/mm.h with the other helpers?
>
> Not for now I think, in particular, as we require earlier !folio->mapping checks to give a correct answer. Most people should be using folio_expected_ref_count().
>
Got it. Will use folio_cache_ref_count() in the next version.
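
Something like the following, to make the rename concrete. The
split_refcount_sketch() wrapper below is purely illustrative (not part
of the patch); it only shows how the open-coded check and the freeze
count described in the changelog fit together:

/* Number of folio references from the pagecache or the swapcache. */
static unsigned int folio_cache_ref_count(const struct folio *folio)
{
	if (folio_test_anon(folio) && !folio_test_swapcache(folio))
		return 0;
	return folio_nr_pages(folio);
}

/* Illustrative only; the real checks live in the split path. */
static int split_refcount_sketch(struct folio *folio)
{
	/* Racy check: only the split caller may hold an extra reference. */
	if (folio_expected_ref_count(folio) != folio_ref_count(folio) - 1)
		return -EAGAIN;

	/* Freeze the caller's pin plus any pagecache/swapcache references. */
	if (!folio_ref_freeze(folio, 1 + folio_cache_ref_count(folio)))
		return -EAGAIN;

	return 0;
}
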
Best Regards,
Yan, Zi