Message-ID: <992DA160-BF2B-4D38-B76C-481F49155652@nvidia.com>
Date: Wed, 26 Nov 2025 11:59:34 -0500
From: Zi Yan <ziy@...dia.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>, Naoya Horiguchi <nao.horiguchi@...il.com>,
Wei Yang <richard.weiyang@...il.com>, Balbir Singh <balbirs@...dia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/4] mm/huge_memory: replace can_split_folio() with
direct refcount calculation
On 26 Nov 2025, at 4:56, David Hildenbrand (Red Hat) wrote:
>> static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int new_order,
>> struct page *split_at, struct xa_state *xas,
>> struct address_space *mapping, bool do_lru,
>> struct list_head *list, enum split_type split_type,
>> - pgoff_t end, int *nr_shmem_dropped, int extra_pins)
>> + pgoff_t end, int *nr_shmem_dropped)
>> {
>> struct folio *end_folio = folio_next(folio);
>> struct folio *new_folio, *next;
>> @@ -3782,7 +3773,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>> VM_WARN_ON_ONCE(!mapping && end);
>> /* Prevent deferred_split_scan() touching ->_refcount */
>> ds_queue = folio_split_queue_lock(folio);
>> - if (folio_ref_freeze(folio, 1 + extra_pins)) {
>> + if (folio_ref_freeze(folio, folio_cache_ref_count(folio) + 1)) {
>> struct swap_cluster_info *ci = NULL;
>> struct lruvec *lruvec;
>> int expected_refs;
>> @@ -3853,7 +3844,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>> zone_device_private_split_cb(folio, new_folio);
>> - expected_refs = folio_expected_ref_count(new_folio) + 1;
>> + expected_refs = folio_cache_ref_count(new_folio) + 1;
>> folio_ref_unfreeze(new_folio, expected_refs);
>> if (do_lru)
>> @@ -3897,7 +3888,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>> * Otherwise, a parallel folio_try_get() can grab @folio
>> * and its caller can see stale page cache entries.
>> */
>> - expected_refs = folio_expected_ref_count(folio) + 1;
>> + expected_refs = folio_cache_ref_count(folio) + 1;
>> folio_ref_unfreeze(folio, expected_refs);
>
> Can we just get rid of the expected_refs variable as well?
OK. Will update it.
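Something like the following, I assume, inlining the expression at both
unfreeze sites so the local goes away (sketch only, not the final patch):

	/* unfreeze each new folio with its page cache refs plus our ref */
	folio_ref_unfreeze(new_folio, folio_cache_ref_count(new_folio) + 1);
	...
	folio_ref_unfreeze(folio, folio_cache_ref_count(folio) + 1);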
>
> Apart from that LGTM, thanks!
>
> Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>
Thanks.
Best Regards,
Yan, Zi